agents vs agentic systems
i've been pondering an analogy recently that's helped me think about the difference between ai agents and what we've been calling agentic systems.
traditionally, ai “agents” are built as a boss (LLM) giving direct orders, able to execute only a handful of predefined actions (tools / executable functions). the boss might be able to search google, read a webpage, and maybe save their findings to a journal. but that’s it.
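if it helps to see that setup in code, here's a rough sketch - the llm can only ever pick from a fixed toolbox. everything here is illustrative: `call_llm` stands in for whichever chat-completion api you use, and the three tools are just stubs.

```python
import json

def search_google(query: str) -> str:
    # stand-in: pretend we hit a search api
    return f"(stub) top results for: {query}"

def read_webpage(url: str) -> str:
    # stand-in: pretend we fetched and cleaned the page
    return f"(stub) text of {url}"

def save_to_journal(note: str) -> str:
    # stand-in: pretend we persisted the note somewhere
    return f"saved: {note[:40]}"

# the boss's entire world: a handful of predefined actions
TOOLS = {
    "search_google": search_google,
    "read_webpage": read_webpage,
    "save_to_journal": save_to_journal,
}

def run_agent(task: str, call_llm) -> str:
    """loop until the llm produces a final answer, letting it call tools along the way."""
    history = [{"role": "user", "content": task}]
    while True:
        # the llm can only choose among the predefined tools above, or answer
        reply = call_llm(history, tool_names=list(TOOLS))
        if reply["type"] == "final_answer":
            return reply["content"]
        tool = TOOLS[reply["tool_name"]]
        result = tool(**json.loads(reply["arguments"]))
        history.append({"role": "tool", "content": result})
```

the key constraint is that nothing outside `TOOLS` can ever happen, no matter how capable the model is.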
as LLMs get more and more powerful, those tight constraints seem less and less necessary. in fact, we want to let them do more. i’ve found it much more useful - and intuitive - to visualize an “agentic system.”
i start by thinking about a conductor at the helm of a train traveling through unexplored terrain, with no tracks laid ahead. the conductor looks out front, staring into an open landscape with no predetermined path.
at this point, the conductor calls the engineers to put down the next track so the train can move forward. they decide whether it will be a straight track, a bendy one, or one made out of steel or wood.
but, maybe the conductor spots that the terrain ahead requires climbing a steep hill - something that this train has never faced. so, they pause, consulting their manual to invent a new kind of track that can safely climb the hill.
once the track is laid, the train moves forward. did it climb the hill? yes, fantastic! but was the climb optimal, efficient, repeatable? maybe there’s room for improvement. the conductor iterates until the train smoothly navigates the path. move forward, and repeat.
eventually, the path from start to destination is established, tested, and refined. future trains can easily follow the track, reliably repeating the journey. though, later conductors may have to extend or adapt the route, handling new edge cases or conditions.
this is what we’ve been calling an agentic system - leveraging the generative, logical, and random nature of an LLM to enumerate repeatable pipelines from point A to point B. AKA, using an LLM to generate structured workflows (a control flow, like code) to accomplish previously unknown tasks.
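in code, that lay-track-then-test loop might look something like the sketch below - a hypothetical `call_llm` is asked to write the workflow itself, we run it, check the result, and feed problems back until the pipeline works, then keep it around for reuse. all names here are illustrative, not a real api.

```python
def generate_pipeline(task: str, call_llm, max_attempts: int = 3):
    """ask the llm to write a `pipeline(inputs)` function, then refine it until it works."""
    feedback = ""
    for _ in range(max_attempts):
        # the conductor asks for the next stretch of track: a concrete, inspectable workflow
        source = call_llm(
            f"write a python function `pipeline(inputs)` that accomplishes: {task}\n"
            f"previous feedback: {feedback or 'none'}"
        )
        namespace = {}
        try:
            exec(source, namespace)                      # turn the generated workflow into runnable code
            result = namespace["pipeline"]({"example": "input"})
        except Exception as err:
            feedback = f"failed with: {err}"             # the climb failed; ask for a new kind of track
            continue
        if looks_good(result):                           # your own check: tests, evals, a human review
            return namespace["pipeline"], source         # the track is laid; future runs just follow it
        feedback = f"ran, but output was off: {result!r}"
    raise RuntimeError("could not converge on a working pipeline")

def looks_good(result) -> bool:
    # stand-in for whatever evaluation you actually trust (unit tests, an eval set, eyeballing)
    return result is not None
```

the returned `source` is the symbolic half: a plain, repeatable pipeline you can version, review, and hand to the next train.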
it’s a melding of the neural (creative, black box) and symbolic (interpretable, repeatable) to expand the scope of work that we consider automatable.
thinking about something similar? i’d love to talk! email me