LLM stands for Large Language Model in the context of artificial intelligence and machine learning. It is a type of deep learning model trained on massive amounts of text data to understand and generate human-like language. Examples: GPT-4, Claude, Gemini, LLaMA, etc.
Transformer is a deep learning architecture introduced by Vaswani et al. in the 2017 paper "Attention Is All You Need". It revolutionized how models process sequential data like text and forms the backbone of models like GPT, BERT, Claude, Gemini, and more. A Transformer uses self-attention to weigh the importance of different words in a sequence, allowing it to understand context even over long distances.
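A minimal sketch of the self-attention computation described above, using NumPy with toy dimensions and random weights (purely illustrative, not a trained model):

```python
# Scaled dot-product self-attention: the core operation inside a Transformer layer.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Every token attends to every other token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # pairwise relevance between tokens
    weights = softmax(scores, axis=-1)    # attention weights sum to 1 per token
    return weights @ V                    # each output is a weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))                     # 4 toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)                  # (4, 8)
```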
Prompt Engineering is the practice of crafting effective inputs (prompts) to guide and optimize the behavior of large language models (LLMs) like GPT-4, Claude, or Gemini to get accurate, useful, or creative outputs.
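A small sketch of two common prompting patterns (an explicit role with constraints, and few-shot examples); `call_llm` is a hypothetical placeholder for whichever client you actually use:

```python
# Illustrative prompt-engineering patterns. `call_llm` is a stand-in, not a real API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your LLM client call")

# 1. Role + task + constraints: be explicit about persona, goal, and output format.
review_prompt = (
    "You are a senior Python reviewer.\n"
    "Task: review the function below and list at most 3 concrete issues.\n"
    "Output format: numbered list, one sentence each.\n\n"
    "def add(a, b): return a - b"
)

# 2. Few-shot: show examples of the input -> output mapping before the real input.
few_shot_prompt = (
    "Classify sentiment as positive or negative.\n"
    "Review: 'Loved it' -> positive\n"
    "Review: 'Waste of money' -> negative\n"
    "Review: 'Exceeded my expectations' ->"
)

# answer = call_llm(review_prompt)
```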
RAG stands for Retrieval-Augmented Generation, a powerful technique that enhances large language models (LLMs) by giving them access to external information at inference time. LLMs like GPT or Claude are limited to the data they were trained on. RAG solves this by combining: (1) Retrieval: pulling relevant documents or facts from an external knowledge base (e.g., a database, website, or PDFs). (2) Augmented Generation: feeding those retrieved results into the LLM so it can generate more accurate and up-to-date responses.
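A minimal sketch of the retrieve-then-generate flow, using a toy keyword-overlap retriever and a hypothetical `call_llm` function; real systems typically use embedding search over a vector store:

```python
# Toy RAG pipeline: retrieve the most relevant documents, then stuff them into the prompt.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your LLM client call")

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm, Monday to Friday.",
    "Shipping to Europe takes 5-7 business days.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Toy relevance score: number of words shared between query and document.
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

question = "How long do I have to return an item?"
context = "\n".join(retrieve(question, documents))
prompt = (
    f"Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
# answer = call_llm(prompt)   # the model now grounds its answer in retrieved facts
```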
Fine-tuning is the process of taking a pre-trained large language model (LLM) (like GPT, LLaMA, or Claude) and training it further on custom, domain-specific data to specialize its behavior.
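A hedged sketch of supervised fine-tuning with the Hugging Face Transformers `Trainer`; the model name, toy dataset, and hyperparameters are placeholders, and exact argument names can vary across library versions:

```python
# Further train a pre-trained causal LM on a small custom dataset (illustrative only).
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder for any causal LM you are allowed to fine-tune
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Tiny toy "domain" dataset; in practice this would be thousands of curated examples.
raw = Dataset.from_dict({"text": [
    "Q: What is our SLA? A: 99.9% uptime.",
    "Q: Who owns incident response? A: The on-call SRE.",
]})
tokenized = raw.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=5e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()  # continues training the pre-trained weights on the custom data
```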
AI Agent is an intelligent system, often powered by a Large Language Model (LLM), that can autonomously perceive, reason, and take actions to accomplish a goal, often by interacting with tools, environments, or users.
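A minimal agent-loop sketch: the model chooses a tool, the tool result is fed back as an observation, and the loop repeats until the model answers. The `call_llm` function and the `TOOL:`/`ANSWER:` reply format are illustrative assumptions, not a standard protocol:

```python
# Perceive -> reason -> act loop with a tiny tool registry (illustrative only).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with your LLM client call")

TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy tool; unsafe for untrusted input
    "search": lambda q: f"(pretend search results for: {q})",
}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        reply = call_llm(history + "\nRespond with 'TOOL: name(arg)' or 'ANSWER: ...'")
        if reply.startswith("ANSWER:"):
            return reply.removeprefix("ANSWER:").strip()
        if reply.startswith("TOOL:"):
            name, _, arg = reply.removeprefix("TOOL:").strip().partition("(")
            result = TOOLS[name.strip()](arg.rstrip(")"))
            history += f"\n{reply}\nResult: {result}"   # observation fed back to the model
    return "No answer within the step budget."
```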
LangChain is an open-source framework that helps developers build LLM-powered applications, especially ones that go beyond simple prompts, by enabling reasoning, memory, tool use, and multi-step workflows. It acts as the AI brain of an application (logic, reasoning, LLM interaction).
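A hedged LangChain sketch of the simplest kind of chain (a prompt template piped into a chat model); the import paths reflect recent versions and may differ in yours, and the model name is a placeholder:

```python
# Compose a prompt template and a chat model into a runnable chain.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # assumes an OpenAI API key is configured

prompt = ChatPromptTemplate.from_template(
    "Summarize the following text in one sentence:\n\n{text}"
)
llm = ChatOpenAI(model="gpt-4o-mini")   # model name is a placeholder
chain = prompt | llm                    # prompt -> model, composed into one runnable

# result = chain.invoke({"text": "LangChain helps developers build LLM apps."})
# print(result.content)
```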
Multi-modal refers to AI systems that can process and understand multiple types of data (modalities), such as text, images, audio, video, or structured data, at the same time or in combination.
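A hedged sketch of a multi-modal request in the OpenAI chat-completions style, mixing text and an image URL in one message; the model name and image URL are placeholders, and other providers use different payload schemas:

```python
# Send text plus an image in a single user message (OpenAI-style payload, illustrative).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is shown in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```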
AGI (Artificial General Intelligence) is an AI system with the ability to understand, learn, and apply knowledge across a wide range of tasks, just like a human.
AI Tools
Cherry Studio is a powerful, open-source desktop AI client designed for Windows, macOS, and Linux. It integrates multiple large language models (LLMs), both cloud-based (like OpenAI, Gemini, Anthropic) and local models via backends such as Ollama or LM Studio, which lets you easily switch between them in conversation.
Website
Docs
GitHub
n8n is an open-source AI workflow automation tool that serves as the automation backbone (triggering, routing, integration): it lets you connect various services (APIs, databases, webhooks, etc.) and automate tasks without writing much code, though coding is also supported for flexibility. A minimal trigger example is sketched after the links below.
Website
Docs
GitHub
Workflows
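As mentioned above, a hedged sketch of triggering an n8n workflow from Python via a Webhook trigger node; the URL path is a placeholder that n8n generates when you add the node, and the host assumes a local instance on the default port:

```python
# Fire an n8n workflow by POSTing to its Webhook node (illustrative URL and payload).
import requests

N8N_WEBHOOK_URL = "http://localhost:5678/webhook/example-path"  # placeholder path

payload = {"customer": "Ada", "event": "signup"}
resp = requests.post(N8N_WEBHOOK_URL, json=payload, timeout=10)
print(resp.status_code, resp.text)  # whatever the workflow's response node returns
```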
Back to top of the page