Here's the full flow, showing how data moves between each component:
YOU give a goal → AGENT orchestrates everything → MODEL does the thinking → SKILLS guide the methodology → TOOLS execute real attacks → TARGET
Model: the AI brain. It thinks and generates text, but it doesn't do anything on its own; it only reasons and responds.
Example: llama3, mistral, GPT-4, Claude — these are all models. Like different brands of brain.
Agent: a model wrapped with the ability to take actions. It can decide what to do next, call tools, and loop until the task is done. It's the model + a goal + a loop.
Example: You tell PentestGPT "hack this IP" — it becomes an agent because it doesn't just answer you, it runs nmap, reads the output, decides what to try next, runs metasploit, and keeps going autonomously.
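That "decide, act, observe, repeat" loop is the whole trick. Here's a minimal sketch of the pattern; `fake_model` is a hypothetical stand-in for the LLM (a real agent would call a model API), and the action names are made up for illustration:

```python
def fake_model(goal, history):
    """Stand-in for the LLM: picks the next action based on what's been done."""
    if not history:
        return ("scan", goal)        # nothing done yet: start with recon
    if history[-1][0] == "scan":
        return ("exploit", goal)     # scan finished: try an exploit
    return ("done", None)            # otherwise: declare the goal complete

def run_agent(goal):
    """The agent loop: ask the model what to do, do it, repeat until done."""
    history = []
    while True:
        action, arg = fake_model(goal, history)
        if action == "done":
            return history
        history.append((action, arg))  # record the step (a real agent records tool output)

steps = run_agent("10.0.0.5")
```

The model never executes anything itself; the loop around it is what turns "answering" into "doing".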
Tools: functions the agent can call to interact with the real world. Without tools, a model is just chatting. Tools let it do things.
Examples:
run_nmap(target) — scans ports
search_web(query) — googles something
run_metasploit(exploit) — fires an exploit
read_file(path) — reads a file on disk
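In code, tools are often just ordinary functions gathered in a registry the agent can look up by name. A small sketch, with stubbed-out bodies (real implementations would shell out to nmap, hit a search API, and so on):

```python
def run_nmap(target):
    """Stub: a real tool would invoke nmap and return its output."""
    return f"open ports on {target}: 22, 80"

def search_web(query):
    """Stub: a real tool would query a search engine."""
    return f"results for {query!r}"

# The registry maps tool names (what the model emits) to functions (what runs).
TOOLS = {"run_nmap": run_nmap, "search_web": search_web}

def call_tool(name, arg):
    """Dispatch a model-chosen tool name to the actual function."""
    return TOOLS[name](arg)

report = call_tool("run_nmap", "10.0.0.5")
```

The registry is the boundary between the model's text world and the real world: the model only ever names a tool, and the agent decides whether and how to run it.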
Skills: pre-written instructions or playbooks that tell the agent how to approach a specific task. Think of them like SOPs (standard operating procedures).
Example: A "SQL Injection skill" might tell the agent: first try sqlmap, if that fails try manual payloads, then document findings this way...
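In practice a skill is often just structured text injected into the model's system prompt. A hypothetical sketch of the SQL Injection skill above (the prompt wording is illustrative, not from any real tool):

```python
# A skill is plain text: a methodology the model reads before acting.
SQLI_SKILL = """\
SQL Injection methodology:
1. Run sqlmap against the target URL first.
2. If sqlmap fails, try manual payloads.
3. Document findings: parameter, payload, evidence.
"""

def build_system_prompt(skills):
    """Assemble the agent's system prompt from its skill playbooks."""
    return "You are a pentest agent. Follow these procedures:\n\n" + "\n".join(skills)

prompt = build_system_prompt([SQLI_SKILL])
```

Because skills are just prompt text, adding a new methodology means writing a new playbook, not retraining the model.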
You give a goal to the AGENT
↓
AGENT uses the MODEL to think
↓
MODEL reads SKILLS to know best practices
↓
AGENT calls TOOLS to take real actions
↓
Loop until goal is complete
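The five steps above can be wired together in one place. A minimal end-to-end sketch, with a stubbed model and a single made-up tool standing in for the real LLM and real scanners:

```python
def scan(target):
    """Stub tool: a real agent would run an actual scanner here."""
    return f"ports open on {target}"

TOOLS = {"scan": scan}                     # TOOLS: real actions
SKILLS = "Always do recon before exploitation."  # SKILLS: methodology text

def model(goal, skills, observations):
    """Stub MODEL: reads the skills and past observations, picks a next step."""
    if not observations:
        return {"tool": "scan", "arg": goal}       # skill says: recon first
    return {"tool": None, "result": "goal complete"}

def agent(goal):
    """AGENT: loops model -> tool -> observation until the model says stop."""
    observations = []
    while True:
        decision = model(goal, SKILLS, observations)
        if decision["tool"] is None:
            return decision["result"]
        observations.append(TOOLS[decision["tool"]](decision["arg"]))

outcome = agent("10.0.0.5")
```

Each piece is swappable: change `model` to a real LLM call, add entries to `TOOLS`, or extend `SKILLS`, and the loop itself stays the same.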
Real world example in PentestGPT:
Model = llama3 (does the thinking)
Agent = PentestGPT (runs the loop)
Tools = nmap, metasploit, sqlmap (take real actions)
Skills = pentesting methodology built into the prompts (knows to do recon before exploitation)