
Generative Artificial Intelligence: Introduction to Generative AI

Resources on Generative AI

Welcome to Generative AI

Welcome to the NCC Libraries' Research Guide on Generative AI, designed to support students, faculty, and staff in exploring this rapidly evolving field. Generative AI refers to artificial intelligence systems capable of creating new content, such as text, images, music, or code, based on patterns learned from existing data. This guide provides an overview of key concepts, tools, ethical considerations, and resources to help you understand and responsibly engage with generative AI.

 

**This research guide was created with the assistance of ChatGPT and Google Gemini.**

Research Help

Visit us

 

View upcoming library hours

 

Chat

 

Outside of posted hours, our Ask the Librarian chat is staffed by Visiting Librarians from our after-hours service.

 

Email

askthelibrarian@northampton.edu
 

Call

(610) 861-5359

 

Book a Librarian appointment

Request a Book a Librarian research consultation to meet or video chat on a day and time convenient to your schedule.


What is Generative AI?

 

Screenshot of the ChatGPT interface with "What is Generative AI?" as the prompt.

 

Generative AI is a type of artificial intelligence that can create new content like writing, images, music, or even computer code by learning from existing examples. Rather than simply copying what it has seen, generative AI identifies patterns and structures in the data and uses them to generate original outputs. It functions as a tool that can assist with a variety of tasks, such as brainstorming topics, summarizing concepts, working through design issues, problem-solving, and idea generation.


The content AI tools produce can take many forms, including written essays, computer code, musical pieces, or visual art, all generated based on instructions, or "prompts," provided by a user. Unlike traditional internet search engines that locate and present existing information, AI generates new content by predicting what should come next, whether it’s the next word in a sentence, a pixel in an image, or a note in a song. It does this by recognizing and mimicking patterns learned from huge amounts of data. That’s how it can produce such a wide range of content that often feels surprisingly human. But because it’s predicting rather than truly understanding, it’s important to look at its results with a critical eye.

 

**While these tools can be helpful, it’s important to use them ethically, always verifying information and ensuring your work maintains academic integrity.**

How does Generative AI work?



Generative AI works by using models that learn patterns from data or prompts to create new content or output.

Large Language Models (LLMs) power artificial intelligence tools like ChatGPT, Google Gemini, and Claude. Large Language Models rely on text prompts from the user to produce output. They are trained on huge amounts of text from books, websites, and other sources to learn how language works, and they analyze patterns in that data to predict which words are likely to come next. This training allows them to generate text, answer questions, or hold conversations.
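The core idea of "predicting the next word from patterns in text" can be sketched in a few lines of code. This is only a toy illustration with hand-made example text; real LLMs use neural networks trained on billions of words, not simple word-pair counts.

```python
from collections import Counter, defaultdict

# Toy training text (hypothetical example, not real training data).
corpus = "the library is open the library is busy the library is open".split()

# Count which word follows each word in the training text.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("library"))  # → "is" ("is" always follows "library" above)
print(predict_next("is"))       # → "open" ("open" appears twice, "busy" once)
```

An LLM does something conceptually similar, but instead of counting exact word pairs it learns statistical patterns across entire sentences and documents, which is why its predictions feel fluent rather than mechanical.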

Diffusion Models are a type of AI that create images based on a text prompt. They begin with a random pattern of pixels (pure noise) and gradually remove the noise, step by step, until a clear image emerges that matches the prompt they were given. Using training data, the model learns how to produce realistic images. Popular AI tools that use diffusion models include the image generators Midjourney, DALL-E, and Adobe Firefly.

**Large Language Models and Diffusion Models have become the best-known types of generative AI due to the large-scale adoption of the tools mentioned above, such as ChatGPT and DALL-E.**

 

Large Language Models and Diffusion Models have some important things in common. Both are based on computer programs called neural networks, and both use large amounts of training data to produce output (text or images).

Neural Networks are computer programs inspired by the way the human brain works. They process information through layers of connected "nodes" that learn to recognize patterns in data. In LLMs, neural networks learn the patterns and structure of language to predict and generate text. In diffusion models, neural networks learn how to gradually turn random pixels into images by recognizing patterns in visual data.
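A single layer of connected "nodes" can be sketched in a few lines. This is a minimal illustration with hand-picked weights; in a real network the weights are learned from training data, and there are many layers with thousands or millions of nodes.

```python
import math

def sigmoid(x):
    # Squashes any number into the range 0..1 (a common activation function).
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each output node weighs all inputs, adds a bias,
    and applies the activation function."""
    return [sigmoid(sum(w * i for w, i in zip(node_w, inputs)) + b)
            for node_w, b in zip(weights, biases)]

# Hypothetical weights chosen only for illustration.
inputs = [0.5, -1.0]
hidden = layer(inputs, weights=[[1.0, 0.5], [-0.5, 1.0]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)  # a single value between 0 and 1
```

"Learning" in a real network means repeatedly adjusting those weights so that the outputs better match the patterns in the training data.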

Training data is the information used to teach an AI model how to perform a task. For LLMs, training data includes massive collections of text (books, articles, websites, etc.), which help the model learn grammar, facts, and how language works. For diffusion models, training data consists of millions of images, often paired with captions or descriptions, so the model can learn how to generate images that match specific prompts.