Basics of Prompts

Whether in coding, literature search, or data analysis, a well-crafted prompt improves the relevance, accuracy, and quality of the responses. Understanding prompt basics is essential for effectively using generative AI tools, as prompts are the instructions that guide these models to produce desired outputs. By mastering prompt basics, users can communicate more clearly with AI models, setting the foundation for more advanced techniques like prompt engineering, which refines prompts to achieve even more precise results.

What is a prompt?

A prompt is a piece of text or input given to an AI model to guide its response.

  • Role in Communication: The prompt guides the AI by setting the tone, context, and direction of the response, ensuring the output aligns with the user’s goals.
  • Variability: Prompts can vary in complexity, length, and style, depending on the desired result—ranging from simple questions to detailed instructions.
  • Processing the Input: When a generative AI model receives a prompt, it parses the language to understand the request, identifying key elements and determining the response type, which helps the AI produce relevant and accurate outputs.

Main Types of Prompts

1. Informational Prompts

These prompts ask the AI to provide background information, summaries, or explanations on a specific topic. They are fact-oriented, seeking comprehensive explanations.

2. Instructional Prompts

Instructional prompts guide the AI to perform specific tasks or generate step-by-step outputs. They are task-focused and clear, with defined output goals.

3. Query Prompts

Query prompts are used to ask specific questions, often for fact-finding or clarifying details on particular concepts or methods. They are aimed at obtaining quick, straightforward answers.

Prompt Engineering

Prompt engineering is the practice of designing prompts strategically to achieve precise and relevant outputs from an AI model. By structuring prompts with clear instructions, relevant context, and guiding cues, users can enhance the quality and specificity of AI responses. In bioinformatics, this technique can be used to produce targeted outputs for coding, data analysis, literature summarization, and more.

Five Key Principles of Prompt Engineering:

  1. Giving Clear Instructions
  2. Directing the Model’s Focus with Primary Content
  3. Providing Examples
  4. Priming the Output with Cues
  5. Evaluating Output Quality

1. Giving Clear Instructions

Instructions direct the AI model on what action to perform, from simple tasks to complex, multi-step requests. The level of detail in instructions impacts the model’s ability to meet specific needs.

  • Less Effective Prompt:

    Summarize the meeting notes.

  • More Effective Prompt:

    Summarize the meeting notes in a single paragraph. Then list the speakers with each of their key points. Finally, outline any next steps or action items suggested by the speakers.

2. Directing the Model’s Focus with Primary Content

Primary content specifies the main body of text the model is expected to process or transform. When combined with instructions, it helps the AI concentrate on the essential elements of the task.

3. Providing Examples

Examples improve the model’s response accuracy and relevance by setting a template for the desired output. They help ‘condition’ the AI, especially when demonstrating specific types of responses or styles.

  • Less Effective Prompt:

    Create a comprehensive report on the implementation of a new electronic health records (EHR) system.

  • More Effective Prompt:

    Create a comprehensive report on the implementation of a new electronic health records (EHR) system in a mid-sized hospital. For instance, describe how the project team managed budget constraints while ensuring system efficiency, similar to how the XYZ Hospital project was handled.

4. Priming the Output with Cues

Cues serve as starting points or ‘jumpstarts’ for the AI’s response, setting up the output structure. They help guide the model toward the desired format and flow.

  • Example Cue:

    In 5 concise points, summarize…

5. Evaluating Output Quality

Assessing the AI’s output is crucial to ensure accuracy and usability. Key questions to consider include:

  • Is the information correct and relevant?
  • Is the output unbiased?
  • Is the response clear and suitable for the intended purpose?
  • Could refining the prompt improve the result?

Pulling It All Together

Combining all elements—clear instructions, focused content, examples, and cues—leads to a well-engineered prompt that yields high-quality output.

Prompt Example

[Instructions] Create a comprehensive report on the implementation of a new electronic health records (EHR) system in a mid-sized hospital.
[Primary Content] Focus on key project management aspects such as timeline, budget, stakeholder engagement, and risk management.
[Examples] For instance, describe how the project team managed budget constraints while ensuring system efficiency, similar to how the XYZ Hospital project was handled.
[Cues] Start with an overview of the project scope and objectives.
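
If you build prompts programmatically rather than typing them into a chat interface, the same structure can be assembled in code. The short Python sketch below is only an illustration: the variable names are arbitrary, no particular AI service is assumed, and the assembled prompt is simply printed.

    # Illustrative only: assemble the labeled prompt sections into one string.
    instructions = (
        "Create a comprehensive report on the implementation of a new "
        "electronic health records (EHR) system in a mid-sized hospital."
    )
    primary_content = (
        "Focus on key project management aspects such as timeline, budget, "
        "stakeholder engagement, and risk management."
    )
    examples = (
        "For instance, describe how the project team managed budget constraints "
        "while ensuring system efficiency, similar to how the XYZ Hospital "
        "project was handled."
    )
    cues = "Start with an overview of the project scope and objectives."

    # Joining the parts in a fixed order keeps the prompt structure consistent
    # and makes each element easy to swap out or refine independently.
    prompt = "\n\n".join([instructions, primary_content, examples, cues])
    print(prompt)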

Advanced Prompting Techniques

Advanced prompting techniques allow users to communicate more effectively with AI models, leading to more accurate and tailored responses. These strategies can help ensure that the model processes complex requests accurately, provides clear reasoning, and handles repetitive tasks efficiently.

  • Give the Model a Role

    Assigning a role to the model at the beginning of the prompt encourages it to respond in a manner consistent with that role. By setting a specific role, such as “data analyst,” “teacher,” or “research assistant,” the model can better align its responses with the expected expertise or perspective (see the sketch after this list for a minimal example that combines a role with step-by-step instructions).

  • Specify Steps to Complete the Task

    For tasks that require multiple actions, break down the process into clear, sequential steps. Using bullet points or numbered lists improves clarity, helping the model execute each step in the correct order.

  • Indicate Distinct Parts of Your Prompt

    Dividing your prompt into sections or labeled parts helps the model focus on each component of the request. Adding section titles or ordered steps clarifies the output structure and ensures the response covers all aspects of the prompt.

  • Chain of Thought Prompting

    Chain of Thought (CoT) prompting instructs the model to follow a logical, step-by-step reasoning process. This technique is particularly helpful for complex problem-solving tasks, as it allows the model to “think aloud” and process each step methodically, improving transparency and reducing errors.

    Example CoT Prompts:

    • “Explain this concept step-by-step.”
    • “Break down the reasoning behind each answer.”
    • “Imagine you’re teaching this to someone new. Explain each step in detail.”
    • “Consider each part separately and explain your approach.”
    • “What assumptions are being made? Walk me through your thought process.”

  • Get the Cleanest Responses

    To maximize output quality, consider specifying the desired length and style of the response. Use concise language to avoid unnecessary verbosity, and remember token limits, which cap the length of the model’s response.

    Tips for Clear Responses:
    • Specify output length: “Summarize in three sentences.”
    • Request concise responses: “Answer briefly.”
    • Understand token limits to avoid overly long or cut-off responses.

  • Breaking Down Complex Tasks

    Complex tasks are often best handled by breaking them into smaller, manageable steps. By structuring complex prompts into simpler stages, the model can focus on one part at a time, increasing accuracy and relevance.
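
The techniques above can be combined when prompting a model through an API rather than a chat window. The sketch below is a minimal example assuming the official OpenAI Python SDK (the openai package) with an API key in the OPENAI_API_KEY environment variable; the model name, role, and task wording are illustrative placeholders, not a prescribed recipe. It assigns a role via the system message, lists numbered steps, and ends with a simple chain-of-thought cue.

    # Minimal sketch: role assignment, numbered steps, and a chain-of-thought cue
    # sent through a chat-style API. Assumes the official OpenAI Python SDK
    # (pip install openai) and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    system_role = "You are a bioinformatics research assistant."  # gives the model a role

    user_prompt = (
        "Task: Explain how to normalize single-cell RNA sequencing counts.\n"
        "Steps:\n"
        "1. List common normalization methods.\n"
        "2. Explain when each method is appropriate.\n"
        "3. Summarize the trade-offs in a short list.\n"
        "Work through each step in order before giving the final summary."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_role},
            {"role": "user", "content": user_prompt},
        ],
    )
    print(response.choices[0].message.content)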

Leveraging Templates for Repetitive Tasks

Using templates can help automate repetitive tasks by providing a consistent format. Templates standardize responses and save time, particularly when similar tasks must be performed multiple times.

Template Example: Email Summary

Task: Summarize Email
Content: [Paste the entire email content here.]
Instructions:

  1. Provide a concise summary of the email.
  2. Highlight any key points or action items.
  3. Identify the sender’s primary intent or purpose.
  4. Note specific dates, deadlines, or important details mentioned.
  5. Mention any unclear points or ambiguities in the email.

Supporting Information: [Add any context that may assist in understanding, such as previous related emails or project background.]
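
When the same template is reused across many inputs, it can also be filled programmatically. The Python sketch below is a minimal illustration: the example email and context strings are made up, and the filled prompt is simply printed for use in whichever AI tool you prefer.

    # Illustrative only: fill the email-summary template for a batch of emails.
    TEMPLATE = (
        "Task: Summarize Email\n"
        "Content: {content}\n"
        "Instructions:\n"
        "1. Provide a concise summary of the email.\n"
        "2. Highlight any key points or action items.\n"
        "3. Identify the sender's primary intent or purpose.\n"
        "4. Note specific dates, deadlines, or important details mentioned.\n"
        "5. Mention any unclear points or ambiguities in the email.\n"
        "Supporting Information: {context}"
    )

    emails = [  # hypothetical example inputs
        {
            "content": "Reminder: the project review has moved to Friday at 10 am.",
            "context": "Part of the weekly project meeting thread.",
        },
    ]

    for email in emails:
        prompt = TEMPLATE.format(**email)
        print(prompt)  # paste or send the filled prompt to the AI tool of your choice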

Using Generative AI Tools in Research

Generative AI, a branch of artificial intelligence, focuses on creating algorithms capable of generating new data that resembles existing information. This technology is powered by models such as large language models (LLMs) and other deep learning architectures, which can produce text, code, images, and even complex data predictions. With the advancement of generative AI, these tools are now being applied across various research fields, with bioinformatics emerging as one of the areas where they hold transformative potential.

Generative AI tools can streamline bioinformatics processes, making them faster and often more intuitive. For instance, generative AI can assist in automating coding tasks, searching through extensive literature, and identifying patterns within large biological datasets.

Key benefits of generative AI in bioinformatics include:

  • Enhanced Literature Discovery: Streamlines finding relevant studies and identifying research trends.
  • Coding Efficiency and Automation: Simplifies code generation, debugging, and repetitive tasks.
  • Simplified Data Analysis: Facilitates complex data processing and statistical analysis.
  • Visualization and Reporting: Enables creation of visuals and reports without extensive coding expertise.

Literature Search using Generative AI Tools

This section explores how generative AI tools like ResearchRabbit, SCISPACE, Elicit, Litmaps, and Perplexity can streamline the literature search process. These tools help researchers discover relevant studies, track new publications, map connections between topics, and summarize findings with ease. By harnessing AI to organize and synthesize information, researchers in bioinformatics and other fields can conduct thorough literature reviews more efficiently, staying up-to-date with the latest research while focusing on the most relevant content.

ResearchRabbit

ResearchRabbit is a free and powerful AI-driven literature search and discovery tool designed to help researchers explore and organize academic literature more effectively. It connects users to vast databases across multiple fields, particularly excelling in the Biomedical and Life Sciences with access to PubMed as well as all subject areas through Semantic Scholar. ResearchRabbit covers hundreds of millions of academic articles, making it one of the largest literature databases available, second only to Google Scholar.

Literature Discovery

ResearchRabbit makes it easy to find relevant research papers by allowing users to start with a title, Digital Object Identifier (DOI), PubMed Identifier (PMID), keywords, or by connecting to a Zotero collection. Users can also upload a BibTeX or RIS file containing selected papers to begin their search. ResearchRabbit then builds a network of related studies, streamlining discovery and uncovering connections between publications that might not be immediately obvious. Users can further refine their collection by exploring similar studies, earlier foundational work, more recent developments, or filtering by specific authors. This ensures more targeted and relevant results in vast fields of study.

Visualization

ResearchRabbit offers a unique visual representation of search results through an interactive, graph-based layout. This feature helps users understand relationships between studies, identify influential works, and uncover new connections. Users can toggle between a network view and a timeline view to visualize links between their collection and up to 50 related papers. Labels can be customized to display either the first or last author alongside the publication year, enhancing clarity in exploring academic networks.

Personalized Collections

Researchers can create customized collections for different topics, projects, or themes. These collections help in organizing literature in a personalized library that is easy to revisit and expand.

Collaboration

ResearchRabbit allows users to share collections with colleagues, making it easier to collaborate on literature reviews or research projects. This feature enhances team-based work by ensuring everyone has access to the same resources and updates.

Alerts and Updates

Once a user has created a collection, ResearchRabbit will send alerts for newly published studies that are relevant to the topics of interest. This automated alert system ensures researchers stay up-to-date without constantly monitoring databases.

Recommendations

ResearchRabbit uses AI-driven recommendations to suggest additional papers, authors, or topics related to the user’s area of interest. These recommendations help expand literature searches, leading to a more comprehensive understanding of the field.

Challenge

Task

Create a ResearchRabbit collection on the topic “Normalization methods for single cell RNA sequencing data”.

Example ResearchRabbit Collection:

Shareable link: https://www.researchrabbitapp.com/collection/public/NLVW23GMZP

SCISPACE

SCISPACE is an AI-powered platform designed to streamline and enhance the research process by helping users read, interpret, and synthesize complex academic literature. It serves as an all-in-one tool for everything from understanding difficult concepts to generating citations and even creating summaries or paraphrases. It helps researchers quickly summarize, analyze, and interpret complex academic papers, making it easier to digest detailed information and access context-specific explanations using the chat with PDF feature. SCISPACE is especially beneficial for researchers in bioinformatics, where papers often contain complex data, specialized terminology, and dense information.

Chat with PDF

SCISPACE includes an interactive “Chat with PDF” feature, enabling users to ask questions directly about the content of a PDF document. This feature provides immediate, specific answers from the document, facilitating faster comprehension and targeted information retrieval without needing to read the entire paper.

Co-Pilot for Research

SCISPACE functions as a real-time research assistant or “co-pilot,” guiding users through papers by providing explanations, answering questions, and breaking down complex sections. This feature allows users to engage more deeply with the content without extensive background knowledge.

Literature Review Assistance

SCISPACE helps researchers conduct comprehensive literature reviews by quickly summarizing key findings, identifying relevant studies, and suggesting connected works. This feature is ideal for staying up-to-date on recent advances or exploring a new area of study.

AI Writer

This feature assists researchers in generating well-structured, accurate content, whether for drafting sections of a paper, summarizing results, or creating reports. It saves time by producing high-quality text aligned with academic standards.

Find Concepts

SCISPACE can detect and explain core concepts within a document, providing definitions and context that help users understand complex terminology and subject-specific language, particularly useful for interdisciplinary research.

Paraphraser

The paraphrasing tool allows users to rephrase sentences or sections, maintaining the original meaning while avoiding direct quotation. This feature is useful for integrating findings from other sources in a unique way without risking plagiarism.

Citation Generator

SCISPACE generates citations in various styles (APA, MLA, Chicago, etc.), simplifying the citation process and ensuring accuracy. It allows users to cite references directly, saving time when preparing manuscripts or presentations.

Extract Data

This feature enables users to extract specific data points, figures, or tables from research papers, making it easier to gather information for data analysis, systematic reviews, or meta-analyses without manually combing through entire documents.

AI-Detector

The AI-detection tool helps verify content originality by detecting AI-generated text, a useful feature for ensuring the authenticity of research submissions or identifying instances of AI influence in other works.

PDF to Video

SCISPACE can transform PDF documents into video summaries, providing a visual overview of research findings and complex concepts. This feature is particularly beneficial for sharing research with non-expert audiences or creating multimedia presentations.

Challenge

Task

Use the SCISPACE “Chat with PDF” feature to review and interpret the concept of the predicted Local Distance Difference Test (pLDDT) score as discussed in the paper “Highly accurate protein structure prediction with AlphaFold”.

Understanding pLDDT in Protein Structure Prediction

pLDDT, or predicted local-distance difference test, is a confidence measure used in the AlphaFold protein structure prediction system. Here are the key aspects of pLDDT:

  • Purpose:
    • pLDDT is designed to assess the reliability of the predicted protein structure. It provides a quantitative measure of how confident the model is about the accuracy of its predictions for each residue in the protein structure.
  • Calculation:
    • The pLDDT score is derived from the predicted local distances between Cα atoms in the protein structure. It estimates the local accuracy of the predicted structure, i.e., how closely the predicted Cα distances would agree with those of the true structure, which is essential for understanding the model’s performance on a per-residue basis.
  • Interpretation:
    • The pLDDT score ranges from 0 to 100, where higher scores indicate greater confidence in the accuracy of the predicted structure. A score above 70 is generally considered to indicate a reliable prediction, while scores below this threshold suggest lower confidence in the accuracy of the predicted structure.
  • Relation to lDDT-Cα:
    • The pLDDT measure is closely related to the local-distance difference test (lDDT-Cα), which specifically focuses on the accuracy of the backbone structure. The pLDDT reliably predicts the lDDT-Cα accuracy, making it a valuable tool for evaluating the quality of protein structure predictions.
  • Application:
    • Researchers can use pLDDT scores to prioritize which predicted structures to investigate further, as higher scores suggest that the corresponding structures are more likely to be accurate representations of the actual protein conformations.

In summary, pLDDT is a crucial metric in AlphaFold that helps researchers gauge the confidence of protein structure predictions, guiding further analysis and experimental validation.

Calculation of pLDDT Score in AlphaFold

The pLDDT score is a critical metric in the AlphaFold protein structure prediction framework, providing insights into the confidence of the predicted structures. Here’s how the pLDDT score is calculated:

  • Local Distance Predictions:
    • The pLDDT score is based on the predicted distances between Cα atoms in the protein structure. It evaluates how well the predicted distances align with the actual distances observed in known protein structures.
  • Per-Residue Networks:
    • The calculation involves small per-residue networks that analyze the final activations at the end of the AlphaFold network. These networks specifically focus on predicting the local distances and angles for each residue, which are essential for determining the overall structure accuracy.
  • Error Prediction:
    • The pLDDT score is derived from a pairwise error prediction, which is computed as a linear projection from the final pair representation of the predicted structure. This approach allows the model to estimate the local accuracy of each residue based on the predicted distances.
  • Final Score Generation:
    • The resulting pLDDT score is then generated by aggregating the local distance predictions across the entire protein structure. This score reflects the model’s confidence in the accuracy of the predicted structure for each residue, with higher scores indicating greater confidence.
  • Interpretation of Scores:
    • The pLDDT score is typically reported on a scale from 0 to 100, where scores above 70 are generally considered reliable. This scoring system helps researchers assess which parts of the predicted structure are likely to be accurate and which may require further investigation or validation.

In summary, the pLDDT score is calculated through a combination of local distance predictions, per-residue networks, and pairwise error predictions, ultimately providing a confidence measure for the accuracy of protein structure predictions in AlphaFold.
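
To connect this interpretation to practice, the per-residue pLDDT values of an AlphaFold model (stored in the B-factor column of its PDB/mmCIF file) can be summarized with a few lines of code. The sketch below is a hypothetical example: the score list is made up, and the bands follow the thresholds commonly reported for AlphaFold models.

    # Illustrative only: bin per-residue pLDDT values into the confidence bands
    # commonly used for AlphaFold models (>90 very high, 70-90 confident,
    # 50-70 low, <50 very low).
    def plddt_band(score: float) -> str:
        if score > 90:
            return "very high"
        if score > 70:
            return "confident"
        if score > 50:
            return "low"
        return "very low"

    plddt_scores = [96.2, 88.5, 71.3, 48.9]  # hypothetical per-residue values
    for i, score in enumerate(plddt_scores, start=1):
        print(f"residue {i}: pLDDT {score:.1f} ({plddt_band(score)})")
    print(f"mean pLDDT: {sum(plddt_scores) / len(plddt_scores):.1f}")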

Perplexity

Perplexity is an AI-driven search engine designed to provide concise, accurate answers to user queries by leveraging natural language processing. Perplexity quickly synthesizes information from academic databases, reputable websites, and other trusted sources, making it a valuable tool for researchers who need to find, verify, and understand information efficiently.

  • Instant Answers and Summaries: Perplexity provides direct, concise answers to complex questions, summarizing information from multiple sources into clear and accurate responses. This feature allows researchers to save time by accessing key information without sifting through multiple articles or websites.

  • Source Citations and Verification: Every answer provided by Perplexity includes citations, ensuring users can verify the information and dive deeper into the original sources if needed. This transparency is especially useful for academic and scientific research where source reliability is crucial.

  • Natural Language Search: Perplexity allows users to input queries in everyday language, making it easy to ask detailed or specific questions. The AI interprets complex queries, delivering targeted results that align with the user’s intent, which is helpful for exploring intricate research topics.

  • Multidisciplinary Information Access: Drawing from a variety of reputable databases, academic journals, and authoritative sources, Perplexity covers a broad range of topics. This breadth makes it useful for interdisciplinary research, where insights from multiple fields are often needed.

  • Dynamic Suggestions and Follow-Up Questions: Perplexity offers suggestions for follow-up questions based on initial queries, encouraging users to explore related topics or refine their research focus. This feature facilitates deeper inquiry and expands the researcher’s understanding of a subject.

  • Research Topic Summaries: For researchers exploring new areas, Perplexity provides topic summaries that offer a well-rounded introduction to unfamiliar subjects. These summaries include key concepts, important studies, and common research questions, making it easier to get up-to-speed quickly.

  • Personalized Search History: Perplexity keeps track of past queries, allowing users to revisit previous searches or build on previous inquiries. This feature helps researchers maintain a cohesive research path, especially when working on long-term projects.

Elicit

Elicit is an AI-powered research assistant specifically designed to streamline and enhance the literature review process, helping researchers identify relevant studies, summarize key findings, and organize insights. Built with academics and scientists in mind, Elicit simplifies tasks like data extraction, evidence synthesis, and hypothesis generation.

  • Literature Discovery and Summarization: Elicit assists researchers in finding and summarizing research papers by surfacing relevant studies from databases and providing concise summaries of key information. This feature helps users quickly build an overview of the literature on a specific topic.

  • Customized Search Criteria: Elicit allows users to customize search criteria based on specific questions, such as study population, intervention types, outcome measures, or research design. This targeted search feature ensures that results are aligned with the user’s research goals, particularly useful for systematic reviews and meta-analyses.

  • Evidence Synthesis: Elicit aggregates evidence from multiple studies, summarizing findings across sources to provide a broader understanding of a research question. This synthesis feature is helpful for identifying trends, common findings, or gaps within a particular field.

  • Data Extraction: Elicit can automatically extract key data points from studies, such as sample size, effect size, and statistical significance, making it easier to gather quantitative information for analysis. This feature saves time and reduces the manual work typically involved in data collection.

  • Hypothesis Generation and Testing: Elicit aids researchers in generating and refining hypotheses by summarizing existing evidence, identifying potential causal factors, and suggesting variables to consider. This feature helps structure research questions and informs experimental design.

  • Interactive Q&A: Elicit includes a question-and-answer feature that allows users to ask specific questions about studies or topics. The AI responds with information drawn from available literature, making it easy to clarify details or explore particular aspects of a research question.

  • Causal Analysis: Elicit offers tools for exploring causal relationships within data, identifying patterns and associations that may indicate underlying causes or influential factors. This is particularly valuable in fields where identifying cause-and-effect is crucial, such as epidemiology or clinical research.

  • Comparison of Study Findings: Elicit enables side-by-side comparison of study results, providing a comprehensive view of similarities, differences, and contrasts among research papers. This feature is beneficial for synthesizing evidence across studies and forming well-rounded conclusions.

Litmaps

Litmaps is an AI-powered literature mapping tool designed to help researchers stay updated on new publications, visualize connections between research papers, and organize their reading in a structured and interactive way. By creating personalized, visual maps of literature, Litmaps enables researchers to explore the landscape of a topic, track influential studies, and discover relevant papers.

  • Literature Mapping and Visualization: Litmaps visualizes relationships between research papers, allowing users to create interactive maps that show citation networks and thematic connections. This feature helps researchers identify influential studies, understand how different works are related, and spot gaps in the literature.

  • Personalized Citation Alerts: Litmaps provides personalized alerts for new papers related to a researcher’s topics of interest. Users can set up notification streams based on keywords, authors, or topics, ensuring they stay up-to-date on recent publications that align with their research focus.

  • Real-Time Updates: Litmaps offers real-time tracking of citation activity, showing when new research cites papers in the user’s collection. This feature is useful for identifying emerging trends, staying informed about ongoing developments, and recognizing influential studies in the field.

  • Paper Discovery and Recommendations: Litmaps suggests related papers based on the user’s interests and current collections, making it easier to find relevant literature. The recommendation engine uses citation patterns and thematic connections to identify papers that might otherwise be overlooked in a traditional search.

  • Timeline View: Litmaps includes a timeline view, which organizes research papers chronologically. This view helps users track the development of ideas over time, see the progression of influential studies, and understand historical context within a research topic.

  • Customizable Maps: Users can create, organize, and customize their own literature maps to suit specific research projects or areas of interest. This feature allows researchers to tailor maps to include only the most relevant studies and organize them in a way that best supports their workflow.

  • Collaboration and Sharing: Litmaps enables researchers to share their literature maps with colleagues or collaborators, making it a useful tool for team projects and collaborative literature reviews. Shared maps facilitate group discussion, collective insights, and joint discovery of relevant literature.

Data Analysis using Generative AI Tools

In bioinformatics, where datasets can be vast and intricate, generative AI tools streamline the data analysis workflow by automating tasks such as statistical analysis, data cleaning, and result visualization. For example, tools like ChatGPT, which can analyze data, create visualizations, and run calculations, allow bioinformaticians to interact directly with data through natural language commands. This interactivity supports both exploratory and hypothesis-driven analysis, making it easier to derive meaningful insights without extensive coding.

The following sections will cover various generative AI tools used in data analysis, their features, and practical applications.

ChatGPT

ChatGPT, especially in its Advanced Data Analysis mode, is a versatile tool that allows users to interact with datasets using natural language prompts. This tool can perform tasks like statistical analysis, data cleaning, and visualization, enabling users to skip lengthy coding processes. For example, users can upload datasets directly into ChatGPT and ask it to generate summary statistics, perform regression analysis, or create plots based on the data. ChatGPT can also break down complex steps into manageable stages, making it an ideal tool for both beginners and experienced researchers.

Key Features:

  • Data Analysis and Visualization: Users can ask ChatGPT to analyze datasets and generate visualizations like histograms, scatter plots, box plots, and more. This supports data exploration, revealing patterns and trends within the data.
  • Statistical Analysis: With simple prompts, ChatGPT can conduct statistical tests, such as t-tests, ANOVA, and correlation analysis, producing interpretable results and explanations.
  • Data Cleaning and Preprocessing: ChatGPT assists with tasks like identifying missing values, normalizing data, and filtering outliers, providing a faster way to prepare datasets for analysis.
  • Natural Language Interaction: Researchers can use conversational prompts to guide the AI through specific analyses or calculations, making complex analyses accessible even to those without a coding background.

Julius AI

Julius AI is a specialized data analysis platform that leverages natural language processing to help researchers perform data analysis tasks without deep technical expertise. It allows users to enter data or upload datasets and then interact with Julius AI in a conversational manner to perform various analyses. Julius AI supports tasks like descriptive statistics, data transformation, hypothesis testing, and visualizations, making it a powerful tool for bioinformatics researchers handling complex datasets.

Key Features:

  • Natural Language Commands: Julius AI allows users to perform complex data analysis tasks simply by asking questions or giving commands in natural language, making it approachable for users without a strong background in coding.
  • Hypothesis Testing and Statistical Analysis: The tool supports common statistical analyses, such as chi-square tests, correlation analysis, and regression, and can explain the results in an easy-to-understand format.
  • Data Visualization: Julius AI generates charts and plots based on the user’s input, such as line charts, bar graphs, and heatmaps, allowing quick visualization of results without manual plotting.
  • Automated Data Insights: Julius AI can interpret patterns, identify correlations, and provide insights based on the data, helping researchers to draw meaningful conclusions more efficiently.

Hands-On Session: Analyzing the METABRIC Dataset

In the following hands-on session, we will use ChatGPT’s Advanced Data Analysis and Julius AI to analyze the METABRIC dataset—a comprehensive resource for breast cancer research.

Challenge

Task

Try the following tasks using both ChatGPT and Julius AI.

Use the dataset https://zenodo.org/record/6450144/files/metabric_clinical_and_expression_data.csv to generate the plot shown below.

Note: The reference figure was generated using R; for this task, use Python to generate a similar plot.
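
Prompting the tools to write this code is the point of the exercise, but if you want to verify the result locally, a minimal Python starting point might look like the sketch below. The plotting columns are deliberately left as placeholders, since the reference figure is not reproduced here; inspect df.columns and adjust accordingly.

    # Illustrative starting point only: load the METABRIC table and inspect it
    # before reproducing the target figure.
    import pandas as pd

    url = (
        "https://zenodo.org/record/6450144/files/"
        "metabric_clinical_and_expression_data.csv"
    )
    df = pd.read_csv(url)
    print(df.shape)
    print(df.columns.tolist())

    # Example plot once the relevant columns are known (names are placeholders):
    # import matplotlib.pyplot as plt
    # df.boxplot(column="<expression_column>", by="<clinical_group_column>")
    # plt.savefig("metabric_plot.png")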

Credits and Acknowledgement

This content was adapted from the following materials: