GITERROR

  • Field & Reference Notes
  • Projects

February 2026

Artificial intelligence (AI) Development GenAI Tools

Claude Code — The Practical Guide to Agentic Coding

Claude Code represents the next evolution of AI coding tools — moving beyond autocomplete and chat assistants toward true engineering collaboration. This guide explains how Claude Code differs from tools like Cursor and Copilot, how context and memory actually work, and how to control AI behavior through commands, integrations, and workflow design.

By michal, 5 daysFebruary 28, 2026 ago
Artificial intelligence (AI)

AI Fluency Explained: From Simple Prompts to Autonomous Agents

AI fluency isn’t about better prompts. It’s about understanding how to work with AI — when to automate tasks, collaborate with models, or deploy autonomous agents. A practical breakdown of Anthropic’s engagement model and the 4D framework for real-world AI workflows.

By michal, 2 weeksFebruary 21, 2026 ago
Artificial intelligence (AI) LLM

How to Actually Abuse LLMs

Understanding an LLM’s edges matters more than flattering it.
This post explores how deliberately pushing models into conflicting, noisy, or chained prompts reveals their failure modes — and why learning those edges is the core of real prompt engineering.

By michal, 3 weeksFebruary 14, 2026 ago
Artificial intelligence (AI) LLM

Why LLMs Hallucinate — Patterns, Pitfalls, and How to Guard Against It

Hallucination isn’t a mystery glitch — it’s a predictable consequence of how LLMs work. This Field Note breaks down why models fabricate information, the patterns behind it, and practical ways to reason about and mitigate hallucination in real systems.

By michal, 4 weeksFebruary 7, 2026 ago
Artificial intelligence (AI) LLM RAG

How to Build Reliable LLM Pipelines — Grounding, Verification, and Resilience

LLM pipelines aren’t just prompts and models — they’re grounded, verified, cached, and resilient systems. This Field Note breaks down pragmatic patterns like retrieval grounding, verification checks, caching strategies, and reliability practices that keep AI systems dependable beyond demos.

By michal, 1 monthFebruary 1, 2026 ago
Recent Posts
  • Claude Code — The Practical Guide to Agentic Coding
  • AI Fluency Explained: From Simple Prompts to Autonomous Agents
  • How to Actually Abuse LLMs
  • Why LLMs Hallucinate — Patterns, Pitfalls, and How to Guard Against It
  • How to Build Reliable LLM Pipelines — Grounding, Verification, and Resilience
Archives
  • February 2026
  • October 2025
  • May 2025
  • January 2025
  • December 2024
  • November 2024
  • August 2024
  • February 2022
  • January 2022
  • November 2021
  • October 2021
  • December 2020
  • November 2020
  • October 2020
  • September 2020
  • August 2020
  • July 2020
  • June 2020
  • March 2020
  • February 2020
Categories
  • .NET
  • Architecture
  • Artificial intelligence (AI)
  • AWS
  • C#
  • C++
  • Certification
  • Cloud
  • Data
  • Data Structures & Algorithms
  • Design Patterns
  • Development
  • DevOps
  • Disaster Recovery (DR)
  • Express
  • Frameworks
  • Fundamentals
  • GenAI
  • High-Availability (HA)
  • IaC
  • JavaScript
  • LLM
  • Programming Languages
  • Python
  • RAG
  • Scala
  • Scalability
  • Terminal
  • Tools
Meta
  • Log in
  • Entries feed
  • Comments feed
  • WordPress.org
Hestia | Developed by ThemeIsle