AI, or Artificial Intelligence, is technology developed with the goal of performing tasks typically requiring human intelligence, such as recognizing patterns, making decisions, and communicating. AI can refer to a vast array of technologies, but most AI mimics human cognitive abilities by creating a model of the world based on a limited set of information provided to it, and then guessing how the world it has modeled would behave. Generative AI can even generate content that it would expect to exist in that world, in response to prompts from humans.  

Because of the many, often hidden layers of calculations and decisions that lead to the predictions AI systems make, AI is often referred to as a “black box.” While the innermost workings of some AI tools remain unknown – even to their creators – it is important for AI users to understand the foundations of the tools they may use to guide their research and scholarship. The following 5 definitions aim to demystify the building blocks of Generative AI technology. 

  1. Algorithm: Algorithms are the foundation of all AI tools. An algorithm is a series of steps that allows a computer to accomplish a certain task. A simple example is a set of instructions for sorting a list of numbers. One common algorithm for this task is Bubble Sort, which tells the computer to start at the beginning of the list, compare each number to the next, swap them if they are in the wrong order, and repeat these passes until no more swaps are needed (see the first sketch after this list). This is a simple example, but algorithms can become as complex as instructions for predicting who you might want to follow on a social media platform, or predicting what word will come next in a sentence. 
  2. Machine Learning: In the number-sorting example above, a human provided the Bubble Sort instructions to the computer, based on an approach computer scientists already knew would work. But what if the computer could figure out the best way to sort a list of numbers by itself, by evaluating how closely the results of different approaches match a correctly sorted list? Machine learning is the branch of artificial intelligence that teaches computers to find rules and patterns in a large amount of information in order to make predictions about previously unseen information (see the second sketch after this list). 
  3. Training data: The information used to develop machine learning models is called training data. For example, the machine learning model behind Google Image Search has been shown a vast set of images labeled by humans, which enables it to determine which images match the terms you searched for. The training data in this example was labeled with the “right” answer by humans, but other types of models use calculations to map out the inherent structure of their training data without human input. 
  4. Large Language Model: One example of a machine learning model that uses this approach is a large language model, or LLM. LLMs are fed a vast amount of human-written text, often sourced from the internet, which they analyze to map out the probabilities underlying human language. LLMs use these models of likelihood to mimic human communication, and can accomplish language-related tasks like text summarization, language translation, and question answering (see the third sketch after this list). Large language models form the foundation for Generative AI tools like Microsoft’s Copilot, OpenAI’s GPTs, Meta’s Llama, xAI’s Grok, and Google’s Gemini.  
  5. Deep learning: Large language models have become so sophisticated by using deep learning, an approach to machine learning in which a model passes information through many (sometimes hundreds of) layers of simple calculations and adjusts the weight it gives to each one according to how much it improves the model’s predictions (see the fourth sketch after this list). This layered structure is known as an “artificial neural network” because it loosely mimics how the human brain processes information, and it requires large amounts of data and computational power.  
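
To make the Bubble Sort description in definition 1 concrete, here is a minimal Python sketch. The function name and the sample list are illustrative only.

```python
def bubble_sort(numbers):
    """Sort a list of numbers using the Bubble Sort algorithm."""
    items = list(numbers)              # work on a copy of the input list
    n = len(items)
    for _ in range(n - 1):             # repeat passes over the list
        swapped = False
        for i in range(n - 1):
            # Compare each number to the next and swap if out of order.
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:                # stop early once a pass makes no swaps
            break
    return items

print(bubble_sort([5, 1, 4, 2, 8]))    # [1, 2, 4, 5, 8]
```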
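
Definitions 2 and 3 describe a model learning from labeled training data rather than from explicit rules. The sketch below illustrates that idea with a deliberately tiny "model" that memorizes labeled examples and labels new inputs by finding the closest match; the fruit measurements and labels are invented for illustration, and real systems use far larger datasets and more sophisticated models.

```python
# A minimal example of learning from labeled training data: a
# one-nearest-neighbor classifier. The "model" is simply the training
# examples themselves; predictions are made by finding the most similar
# labeled example. (The fruit data here is invented purely for illustration.)

# Each training example: (weight in grams, diameter in cm) -> label
training_data = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((120, 6.0), "orange"),
    ((130, 6.5), "orange"),
]

def predict(features):
    """Label a new example by its closest match in the training data."""
    def distance(example):
        (weight, diameter), _ = example
        return (weight - features[0]) ** 2 + (diameter - features[1]) ** 2
    _, label = min(training_data, key=distance)
    return label

print(predict((160, 7.2)))  # "apple" -- closest to the apple examples
```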
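
Definition 4 describes LLMs as models of which words are likely to follow which. The toy sketch below builds that kind of probability table from a tiny invented corpus and predicts the most likely next word; real LLMs learn from billions of words using neural networks rather than simple counts.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a tiny corpus,
# then predict the most likely next word. (The corpus is invented for
# illustration; real LLMs learn from billions of words.)
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the mouse"
).split()

# Build a table of next-word counts for each word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most likely word to follow `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- the most frequent follower of "the"
print(predict_next("sat"))  # "on"
```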
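
Definition 5 says a deep learning model adjusts its internal weights to improve its predictions. The sketch below shows that idea at the smallest possible scale: a single artificial "neuron" whose weight and bias are nudged repeatedly until its outputs match the training examples. Real deep networks stack many layers of millions of such units; the data and learning rate here are invented for illustration.

```python
# The smallest possible "neural network": one unit with one weight and one
# bias, trained by gradient descent to learn y = 2x + 1 from example data.
# (The data and learning rate are invented for illustration.)

examples = [(x, 2 * x + 1) for x in range(-5, 6)]  # inputs and target outputs

weight, bias = 0.0, 0.0        # start with arbitrary values
learning_rate = 0.01

for step in range(1000):
    for x, target in examples:
        prediction = weight * x + bias
        error = prediction - target
        # Nudge the weight and bias in the direction that reduces the error.
        weight -= learning_rate * error * x
        bias -= learning_rate * error

print(round(weight, 2), round(bias, 2))  # approximately 2.0 and 1.0
```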

Chatbot

User interfaces for interacting with generative AI models, which produce human-like text in response to prompts from humans. 

Examples: ChatGPT, Microsoft Copilot, Gemini, DeepSeek, Perplexity, Grok, Claude, Hopkins AI Lab

Large Language Model

Machine learning models designed for natural language processing, particularly language generation. The chatbots listed above are all powered by large language models. Some LLMs are made openly available and can be customized and deployed by users. LLMs are also used in many other applications. 

Examples: GPT-4, Llama 4, Claude 3, Gemini 3, DeepSeek-R1, Grok 3 

Generative AI

A general term referring to AI tools and technologies capable of generating human-like content in response to prompts, including text, images, music, or speech. 

Where to Get Help with Generative AI at Hopkins