Post-Session Resources
Everything you need to continue your AI journey
What's a Large Language Model (LLM)?
Picture a super-well-read assistant. Built on huge neural networks (software structures loosely inspired by brain neurons), an LLM has ingested millions of books, articles and webpages. As a result, it can:
- Write and refine. Draft emails, marketing copy, long-form reports or press releases in any tone.
- Digest documents. Summarise 100-page PDFs, compare contracts clause-by-clause or extract data tables.
- Code on demand. Generate, explain and debug scripts in Python, SQL, JavaScript and more.
- Tutor & brainstorm. Break down calculus proofs, role-play a language partner or spark product roadmaps.
- Translate and localise. Work across 100+ languages while preserving nuance.
- Reason over data. Analyse spreadsheets, propose statistical tests, create draft charts.
- Create media. With multimodal models, craft images, slide decks, voice-overs or short videos.
- Act autonomously. Chain tools and APIs to research, schedule tasks or write reports while you focus elsewhere.
How does it work (in one minute)?
Pattern spotting
During training, the network sees billions of word sequences and learns which words are likely to follow which.
Next-word prediction
When you prompt it, the model keeps filling in words (constantly checking what it's written so far) until it completes the answer.
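The next-word idea can be sketched with a toy bigram counter. This is only a stand-in for intuition: the corpus is made up, and picking the single most frequent follower replaces what real models actually do (learn probabilities with a neural network and sample from them).

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then always pick the
# most frequent follower. Real LLMs learn these statistics over billions
# of sequences, but the core task -- predict the next token from the
# tokens so far -- is the same.
corpus = "the cat sat on the mat the cat ate the fish".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follower_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, more than any other word
```

An LLM does this one token at a time, feeding each prediction back in as context for the next, which is why its answer "grows" word by word.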
Scaling laws
Researchers found that throwing more data, parameters and compute (GPUs/TPUs) at the network makes its language skills leap, turning it from a pocket dictionary into a global library.
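A scaling law has a simple mathematical shape: loss falls as a power law in model size. The sketch below uses placeholder constants (`ALPHA`, `N_C` are illustrative values, not measurements) just to show the trend of "more parameters, lower loss".

```python
# Illustrative power-law scaling: loss(N) = (N_C / N) ** ALPHA.
# Both constants are hypothetical, chosen only to demonstrate the shape.
ALPHA = 0.076   # hypothetical power-law exponent
N_C = 8.8e13    # hypothetical normalisation constant

def loss(num_parameters):
    """Modelled test loss for a network with `num_parameters` parameters."""
    return (N_C / num_parameters) ** ALPHA

for n in (1e6, 1e9, 1e12):
    print(f"{n:.0e} parameters -> loss {loss(n):.2f}")
```

Note how each thousand-fold jump in parameters keeps lowering the loss, but by smaller and smaller amounts: that diminishing-but-steady improvement is what the "scaling laws" headline refers to.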
Why it matters to you
AI literacy is the next "digital literacy." LLMs will soon be woven into most work and consumer apps; experimenting now means you're ready when they're everywhere.
- •Productivity on tap. Offload routine writing, research and coding so you can tackle higher-value work.
- •Competitive edge. Colleagues who learn to delegate tasks to AI get more done; organisations that adopt it deliver faster and cheaper.
- •Creativity amplifier. Instant first drafts, idea generation and design mock-ups free you to iterate instead of staring at a blank page.
- •Democratised expertise. Need legal phrasing, statistical advice or marketing angles? An LLM gives you a "PhD in your pocket" whenever you ask.
- •Career resilience. Understanding how to direct (and critique) AI outputs positions you for roles that won't vanish but evolve.
Large Language Models (LLMs)
The AI landscape is filled with various tools for different purposes.
*Context window*: the amount of text (measured in tokens) a model can process at once. Larger windows allow for longer documents and conversations.

| Model | Lab | Open/Closed Source | What It's Good At | Context Window |
|---|---|---|---|---|
| Claude Sonnet 4.5 | Anthropic | Closed | Strong coding performance with state-of-the-art SWE-bench results, enhanced computer use capabilities, sustained focus on 30+ hour tasks | 200K |
| Claude Opus 4.5 | Anthropic | Closed | Enhanced reasoning, improved performance in finance/law/medicine/STEM, advanced long-running task execution | 200K |
| GPT-5.1 | OpenAI | Closed | Multimodal large language model designed as a unified system with enhanced reasoning, accuracy, and versatility | 400K |
| Grok-4.1 | xAI | Closed | Staying current with real-world events | 128K |
| Gemini 3.0 Pro | Google DeepMind | Closed | Advanced multimodal reasoning | 1M+ |
| DeepSeek R1 | DeepSeek | Open | Precision thinking on technical tasks | 131K |
| Kimi K2 | Moonshot AI | Open | Step-by-step reasoning, tool use, coding, mathematics, and agentic tasks with efficient MoE architecture | 256K |
| LLaMA 4 | Meta | Open | Advanced reasoning, multimodal tasks, customization | 128K |
| Mistral Large 2 | Mistral AI | Open | Strong open-weight model, privacy-focused, efficient performance | 128K |
| Qwen 2.5 | Alibaba | Open | Excellent multilingual support, competitive reasoning, strong coding | 128K |
| GPT-OSS-120B | OpenAI | Open | General-purpose open-weight models (weights are released, but they are not fully open source: the training data is not included) | 128K |
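The context-window figures in the table can be sanity-checked with a rough rule of thumb: English text averages about four characters per token. The sketch below uses that heuristic (real tokenizers vary by language and content, so treat the numbers as estimates only).

```python
# Back-of-the-envelope check of whether a document fits in a context window.
CHARS_PER_TOKEN = 4  # common heuristic for English text, not an exact count

def estimated_tokens(text: str) -> int:
    """Rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_window(text: str, window_tokens: int) -> bool:
    return estimated_tokens(text) <= window_tokens

report = "word " * 50_000          # 250,000 characters of filler text
print(estimated_tokens(report))    # 62500 estimated tokens
print(fits_in_window(report, 200_000))  # True: comfortably inside a 200K window
```

In practice you also need room for the model's reply and any earlier conversation turns, since the window covers the whole exchange, not just your latest message.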
Ways to Engage
There's no one-size-fits-all approach to using Large Language Models. Depending on your goal (answering a question, automating a workflow, or doing deep research), there are different ways to get value. Here's a breakdown of the most common levels of engagement:
Prompting
Quick Answers, Drafts, and Ideas
The simplest way to use an LLM is to just ask it something. You give it a prompt, and it gives you a response. This is perfect for tasks like writing an email, summarizing an article, or brainstorming names for your new project. Think of this as your AI Assistant.
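Even at this simplest level, a prompt is just structured text, so it pays to make it reusable. The template below is a minimal sketch; the wording and the `task`/`tone`/`text` fields are illustrative, not a required format for any particular model.

```python
# A reusable prompt template: fill in the task, tone, and source text.
TEMPLATE = (
    "You are a helpful assistant.\n"
    "Task: {task}\n"
    "Tone: {tone}\n"
    "Input:\n{text}"
)

def build_prompt(task: str, tone: str, text: str) -> str:
    """Assemble a prompt string from the template's three fields."""
    return TEMPLATE.format(task=task, tone=tone, text=text)

prompt = build_prompt(
    task="Summarise the text in two sentences",
    tone="plain, friendly",
    text="Quarterly revenue rose 12% on strong subscription growth...",
)
print(prompt.splitlines()[1])  # Task: Summarise the text in two sentences
```

Keeping the instructions, tone, and input in separate, labelled slots makes prompts easier to tweak and compare than a single free-form paragraph.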
Reasoning Mode
Think With Me
Beyond basic prompting, modern models can reason through problems step-by-step. You can ask them to compare options, solve logic puzzles, or walk through a scenario in detail. Some models (like Claude or GPT-5) allow for more 'thinking time', like having a smart tutor or analyst on hand.
Tool Use
Coding, Spreadsheets, and Software Help
Some LLMs can operate inside your tools, like writing code in your IDE (with tools like Cursor), editing docs, generating spreadsheets, or even using your terminal. These setups turn your model into a technical co-pilot, helping you build faster, fix bugs, and automate tricky workflows.
Deep Research
Ask the Web, Synthesise the World
If you're trying to understand a topic or explore something new, LLMs with web access (like Perplexity or Grok) can act as hyper-fast researchers. They search across sources, summarize what they find, and even provide citations. Great for decision-making, comparisons, and staying up to date.
Agents & Automation
Do It For Me
At the most advanced level, LLMs can run tasks on your behalf, not just responding but taking initiative. Agents can book meetings, update documents, scrape websites, or even manage recurring workflows. These setups feel like having a junior team member who works around the clock.
Fine-Tuning & Post-Training
Bespoke LLMs
For specialized, high-stakes tasks, a general-purpose model isn't enough. Fine-tuning allows you to retrain a base model on your own private data, like customer tickets, legal documents, or brand guidelines. This creates a bespoke 'expert' model that understands your specific domain.
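Fine-tuning data is commonly prepared as JSON Lines: one example per line, each pairing an input with the desired output. The sketch below shows the idea; the `"prompt"`/`"completion"` field names are illustrative, since the exact schema varies between providers.

```python
import json

# Build a tiny fine-tuning dataset in JSON Lines format:
# one JSON object per line, each pairing an input with the desired output.
examples = [
    {"prompt": "Customer: Where is my order?",
     "completion": "Apologise, confirm the order number, and share tracking."},
    {"prompt": "Customer: How do I reset my password?",
     "completion": "Point to the reset link and offer to stay on the line."},
]

jsonl = "\n".join(json.dumps(record) for record in examples)
print(len(jsonl.splitlines()))  # 2 training records, one per line
```

Real fine-tuning runs use thousands of such pairs; the quality and consistency of these examples matters far more than clever formatting.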
Glossary
Key terms and definitions to help you understand the AI landscape.
| Term | Definition |
|---|---|
| Artificial Intelligence (AI) | A branch of computer science focused on creating systems capable of performing tasks that typically require human intelligence, such as learning, reasoning, problem-solving, and understanding language. |
| Large Language Model (LLM) | An advanced AI system trained on vast amounts of text data to understand and generate human-like text. LLMs predict the next word in a sequence based on context, enabling them to perform tasks like answering questions, summarizing information, and writing code. |
| Generative AI (GenAI) | A subset of AI focused on creating new content, such as text, images, audio, or video, based on patterns learned from training data. Examples include ChatGPT for text and DALL-E for images. |
| Prompt Engineering | The practice of crafting effective prompts to guide LLMs in generating accurate and relevant responses. It involves structuring inputs to maximize the model's performance for specific tasks. |
| Context Window | The amount of text (in tokens) that an LLM can process at once. It includes both the input prompt and the model's previous responses in a conversation. |
| Token | A unit of text (e.g., a word, part of a word, or punctuation) used by LLMs to process and generate language. The context window of an LLM is measured in tokens. |
| Neural Network | A computing system inspired by biological neural networks, consisting of interconnected nodes (neurons) that process and transmit information. The foundation of modern AI systems including LLMs. |
| Training Data | The collection of text, images, or other content used to teach an AI model patterns and relationships. The quality and quantity of training data significantly impact model performance. |
| Fine-tuning | The process of further training a pre-trained model on specific data to improve its performance on particular tasks or domains. This allows models to specialize in certain areas while maintaining their general capabilities. |
| Transformer Architecture | A neural network design that revolutionized natural language processing by enabling models to process all words in a sequence simultaneously while maintaining awareness of their relationships and context. |
| Inference | The process of an AI model generating responses or making predictions based on input. The time and computational resources required for inference can vary significantly between models. |
| Model Parameters | The variables within a neural network that are adjusted during training to capture patterns in data. More parameters generally allow for more complex understanding but require more computational resources. |
| AI Agents | Autonomous systems that can perceive their environment, make decisions, and take actions to achieve specific goals. AI agents can range from simple chatbots to complex systems that can browse the web, use tools, and execute multi-step tasks independently. |