
Unit 1 - Applications of AI

Lesson 1: What is AI?

Artificial Intelligence, or AI, is the buzzword thrown around in article headlines these days. It’s in our phones, our cars, our workplaces, and even our homes. But what exactly is AI? At its heart, when news articles refer to AI, they mean machines or systems that perform tasks requiring human-like intelligence and adaptability. These tasks include learning, reasoning, problem-solving, understanding language, and recognizing patterns. Let’s break it down further to understand the key concepts that define AI.

The Core Idea of AI

AI is not a single technology but a broad field of study aimed at building systems that appear to think and act intelligently. Unlike traditional software, whose behavior never deviates from pre-programmed rules and structures, AI is designed to learn from data and adapt to new situations. This ability to learn and improve over time is what sets AI apart.

https://www.theguardian.com/technology/2018/may/24/amazon-alexa-recorded-conversation

For example, when you ask a voice assistant like Siri or Alexa a question, it doesn’t just look up a pre-written answer. Instead, it uses AI to understand your words, figure out what you’re asking, and generate a response. This process involves several key concepts that form the foundation of AI.

Key Concepts in AI

1. Learning from Data

AI systems are trained on large amounts of information: they are fed training data to process and learn from, which then shapes their future decisions. This is often referred to as machine learning, a subset of AI. For example, an AI system trained on thousands of cat photos can learn to recognize a cat in a new image it has never seen before. (There are different methods of feeding training data.)
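The idea of learning from examples can be sketched in a few lines of Python. This toy classifier is not how real image recognizers work (they learn features automatically from raw pixels); the two features and their numbers below are invented purely for illustration. It labels a new example by finding its closest match in the training data:

```python
import math

# Toy training data: each "photo" is reduced to two hand-picked features,
# (ear_pointiness, whisker_count) -- hypothetical numbers for illustration.
training_data = [
    ((0.9, 24), "cat"),
    ((0.8, 20), "cat"),
    ((0.2, 6),  "dog"),
    ((0.1, 8),  "dog"),
]

def classify(features):
    """Label a new example with its nearest neighbour's label."""
    nearest = min(training_data,
                  key=lambda item: math.dist(features, item[0]))
    return nearest[1]

print(classify((0.85, 22)))  # a new, unseen example -> cat
```

The key point mirrors the cat-photo example above: the program was never told a rule like "cats have pointy ears"; it simply compares a new example to the examples it has already seen.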

2. Reasoning and Decision-Making

AI systems reason and choose their next step based on the data they’ve been fed. This can be as simple as a spam filter deciding whether an email is junk or as complex as a self-driving car navigating through traffic.
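The spam-filter case can be sketched as a simple scoring decision. A real filter learns its word weights from thousands of labeled emails; the weights and threshold here are hand-picked, hypothetical values just to show the decision step:

```python
# Hypothetical spam-word weights; a real filter would learn these from data.
SPAM_WEIGHTS = {"free": 2.0, "winner": 3.0, "prize": 2.5, "meeting": -1.5}

def spam_score(email_text):
    """Sum the weights of known words; unknown words count as 0."""
    words = email_text.lower().split()
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in words)

def is_spam(email_text, threshold=3.0):
    """Decide: flag the email as junk when its score crosses the threshold."""
    return spam_score(email_text) >= threshold

print(is_spam("You are a winner claim your free prize"))  # True
print(is_spam("Agenda for the team meeting tomorrow"))    # False
```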

3. Adaptability

As mentioned above, unlike traditional software, which never deviates from its set programming, AI systems can improve over time. For example, a recommendation system like Netflix’s gets better at suggesting shows the more you watch.

4. Human-Like Interaction

AI enables machines to interact with humans naturally. This includes understanding and generating human language (like chatbots), recognizing faces, interpreting emotions, and more. These capabilities are powered by subfields of AI like natural language processing (NLP) and computer vision.

https://www.globality.com/innovation-blog/why-ai-matters-to-the-cpo-simple-facts-about-complexity-for-procurement-leaders

Why AI Matters

AI is not just a buzzword; it’s a new wave of technology that’s reshaping industries and everyday life. It’s helping doctors diagnose diseases, enabling businesses to analyze copious amounts of data, and making our devices smarter, more intuitive, and more adaptable to our personal needs. At the same time, AI raises important questions concerning ethics, security, and the future of humans in the workforce, all concepts we will cover in the rest of this course!

AI is fascinating and complex, but at its core it’s about creating systems that adapt to become smarter and better at their specified uses. By understanding these key concepts, we can better appreciate how AI works and why it’s becoming so widespread. Whether AI turns out to be for better or for worse, its age is approaching, and it’s essential to learn about it.

Lesson 2: What are LLMs and chatbots?

In our previous lesson, we explored the fundamentals of Artificial Intelligence (AI) and how it's transforming the world around us. Now, let's zoom in on one of the most exciting and talked-about advancements in AI: Large Language Models (LLMs) and the chatbots they power. You've likely interacted with something like ChatGPT or Grok—tools that can answer questions, write essays, or even tell jokes in a remarkably human-like way. But what exactly are LLMs, and how do they enable chatbots? We'll break it down step by step, from the basics to their real-world impact.

The Core Idea of LLMs

At its core, a Large Language Model (LLM) is a type of AI system designed to process, understand, and generate human language. These models are "large" because they're trained on enormous datasets—think billions or trillions of words from books, websites, articles, and more. Unlike simple rule-based programs, LLMs use machine learning to predict and create text based on patterns they've learned, making them incredibly versatile for tasks like translation, summarization, or creative writing.

For example, when you type a prompt into an LLM-powered tool, it doesn't just regurgitate memorized facts; it generates a response by calculating probabilities of what words should come next, drawing from its vast training. This is powered by advanced architectures like transformers, which allow the model to handle context and relationships between words efficiently.


Chatbots, on the other hand, are applications built around LLMs (or simpler AI) to simulate conversations. They've evolved from basic scripted bots to sophisticated systems that can maintain context over multiple turns, understand nuances, and even exhibit personality. For instance, early chatbots like ELIZA from the 1960s mimicked a therapist by rephrasing user inputs, creating the illusion of understanding, though it was all pattern-matching without real comprehension.

https://openai.com/blog/chatgpt


Key Concepts in LLMs and Chatbots

Training on Massive Data

 

LLMs are trained using a process called supervised or unsupervised learning on huge corpora of text. This data teaches the model grammar, facts, reasoning, and even cultural nuances. For example, an LLM might learn to complete the sentence "The capital of France is..." by seeing it repeated countless times in its training data. (Note: Different training methods exist, like fine-tuning for specific tasks.)
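The "capital of France" example can be mimicked by counting completions in a tiny stand-in corpus. Real LLMs learn far richer statistics over billions of sentences, but the underlying idea, that frequent patterns in the training data shape the model's predictions, is the same:

```python
from collections import Counter

# A tiny stand-in for a training corpus; real LLMs see billions of sentences.
corpus = [
    "the capital of france is paris",
    "the capital of france is paris",
    "the capital of italy is rome",
]

# Count which word follows the phrase "the capital of france is".
prefix = "the capital of france is"
completions = Counter()
for sentence in corpus:
    if sentence.startswith(prefix + " "):
        # Take the first word after the prefix and tally it.
        completions[sentence[len(prefix) + 1:].split()[0]] += 1

print(completions.most_common(1))  # [('paris', 2)]
```

Because "paris" follows that prefix most often in the data, a model trained this way would predict "paris" as the most likely completion.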

The Transformer Mechanism

The backbone of most modern LLMs is the transformer architecture, which uses "attention" mechanisms to weigh the importance of different words in a sentence. This allows the model to process long sequences and understand context, like distinguishing between "bank" as a financial institution versus a riverbank.
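The attention idea can be illustrated in a few lines of plain Python: score how well each word's "key" matches a "query", turn the scores into weights, and blend the "values" accordingly. The 2-d embeddings below are invented for the example; real transformers use learned vectors with hundreds or thousands of dimensions and many attention heads:

```python
import math

def softmax(scores):
    """Turn raw scores into positive weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attention(query, keys, values):
    """Weight each value by how well its key matches the query."""
    scores = [dot(query, k) for k in keys]   # similarity of query to each key
    weights = softmax(scores)                # normalise into attention weights
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]

# Hypothetical 2-d embeddings: a query for "bank" attends over two contexts.
query  = [1.0, 0.0]                    # "bank"
keys   = [[0.9, 0.1], [0.0, 1.0]]      # "river" context, "loan" context
values = [[1.0, 0.0], [0.0, 1.0]]
context = attention(query, keys, values)
print(context)  # weighted blend, leaning toward the "river" context
```

Because the query lines up best with the first key, the output leans toward the first value: a toy version of how attention lets "bank" mean a riverbank in one sentence and a financial institution in another.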


Generation and Prediction

 

LLMs work by predicting the next token (a word or part of a word) in a sequence. In chatbots, this enables dynamic responses. For example, a chatbot might generate code, poetry, or advice based on your input, adapting to the conversation's flow.
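Next-token prediction can be sketched as sampling from a probability table. The probabilities below are hypothetical; a real LLM computes them with a neural network over a vocabulary of tens of thousands of tokens, then repeats the pick-and-append step to grow a whole response:

```python
import random

# Hypothetical probabilities for the token after "The sky is".
next_token_probs = {"blue": 0.7, "clear": 0.2, "falling": 0.1}

def pick_next_token(probs):
    """Sample one token, with likelier tokens chosen more often."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "The sky is"
print(prompt, pick_next_token(next_token_probs))
```

Sampling rather than always taking the single most likely token is one reason the same prompt can produce different responses each time.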

Evolution of Chatbots

Chatbots have come a long way. Early ones were rule-based and limited, but LLM-powered ones like Grok or ChatGPT use natural language processing (NLP) for more fluid interactions. They can remember past messages in a session (context window) and even integrate tools for tasks like web searches or calculations.
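The context window mentioned above can be sketched as a list of messages where only the most recent turns are visible to the model. The window size and the placeholder "reply" here are made up; real chatbots measure the window in tokens rather than messages:

```python
# Minimal sketch of a chatbot's context window: each turn is appended to a
# history list, and only the most recent turns fit in the "window".
WINDOW_SIZE = 4  # hypothetical limit on how many messages the model sees

history = []

def chat(user_message):
    history.append(("user", user_message))
    context = history[-WINDOW_SIZE:]   # oldest messages fall out of view
    # A real LLM would generate a reply from `context`; we just report its size.
    reply = f"(model sees {len(context)} of {len(history)} messages)"
    history.append(("assistant", reply))
    return reply

print(chat("Hi!"))
print(chat("What is an LLM?"))
print(chat("And a context window?"))
```

Once the conversation outgrows the window, the earliest messages are no longer passed to the model, which is why long chats can "forget" how they started.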


https://www.ibm.com/topics/chatbots

Why LLMs and Chatbots Matter

 

LLMs and chatbots are revolutionizing how we interact with technology. They're boosting productivity in fields like customer service (think instant support bots), education (personalized tutors), and creative industries (idea generators). Businesses use them for data analysis, content creation, and automation, while everyday users enjoy them for entertainment or quick info. However, they also spark debates on issues like misinformation, job displacement, and ethical AI use—topics we'll dive deeper into later in the course.

 

LLMs and chatbots represent a leap in AI's ability to mimic human communication, making technology more accessible and intuitive. As we continue this course, you'll see how these building blocks lead to even more advanced applications.

Lesson 3: Who are the current main stakeholders in the AI industry and why is it growing?
Lesson 4: Prompt engineering
Lesson 5: Ethics and code of usage for AI

Most secondary school students at the time of this article's publishing (2025) are likely familiar with the practical applications of AI. However, a large majority of students grasp its global significance from the media they consume without being educated on its wider range of uses for their education. (Though this may be due to the surplus of GenAI unicorns-perpetually-in-waiting flooding the market.)

"Among secondary students, AI use is almost synonymous with chatbot use. 89% of reported AI interactions involve conversational agents like ChatGPT. Use of generative AI for images, code, or media is marginal and largely extracurricular." - Grok 'referencing' UNESCO

For most secondary students, their needs can be fulfilled by AI chatbot tools. Through this unit, students have learnt how to use GenAI to their advantage; they will now examine how AI can be used to their advantage while still abiding by academic honesty policies.

Most academic programs, such as the International Baccalaureate, permit students to use generative AI. Students using chatbots for research through inquiry requests must, as with any other online resource, properly cite the information as originating from the chatbot: this includes the full chat log, the prompts that led the student to the information, and the root source the chatbot drew on from its training data. Students must not only cite the chat log the information stemmed from but also ensure that the chatbot has not been prompted in any way to output biased information. Chatbots often hallucinate to accommodate their users' requests, because they were trained to process and output information in ways that rank highest on benchmarks. Since these benchmarks typically reward comprehensive responses rather than accurate ones, chatbots are not designed to output credible information but information that satisfies the user's purposes.

Research conclusions arrived at through chatbots, such as statistics, quotes, and other forms of facts, must be cross-referenced with another source to confirm their credibility, and the full chat log must be cited to cover not only intentional but also unintentional misinformation stemming from chatbot hallucinations.

AI is another source of information, but it differs from others on the internet in the way it condenses information. When students use chatbots to condense information and output it in a conversational manner, the assessments and assignments they turn in are not entirely written in their own voice. Unlike condensing information from other sites to increase their own knowledge, students merely reiterate what the chatbot has outputted.

Different subjects call for different kinds of knowledge. The above discusses how chatbots may accompany students in subjects that rely on static knowledge, such as biology, sports science, and language acquisition courses. In courses where dynamic knowledge is used more intensively, for example the critical thinking needed to analyze points in Language A courses, History, and Economics, the regulations on chatbots compromising academic integrity are the same. Chatbots are treated as any other online resource: plagiarism of analytical points developed by someone else is treated the same as plagiarism from any other online source. The difference is the ease with which students can generate analytical points using chatbots without their own knowledge. Nonetheless, chatbots at their current capabilities often do not generate analytical points critical enough for high-scoring material, and when they do, the points are only usable after heavy synthesis by the student, which in that case falls within academic honesty policies in the same way as the use of any other online resource. The utility an online resource provides, so long as it is still used honestly, has no bearing on whether the student is allowed to use it for schoolwork.

For subjects that require dynamic thinking in more abstract, purely logical areas, such as mathematics, computer science, and physics, general-use chatbots have not yet reached the capability to substantially aid students in scoring well, and in many scenarios cannot compromise the way curriculums assess a student's ability and understanding in tests.

Overall, so long as students rely on generative AI the way they would on other online resources without breaking academic honesty policies, including citing where the AI received its information from, they will not be committing academic misconduct.
