Hacker News

The new skill in AI is not prompting, it's context engineering

656 points by robotswantdata yesterday at 8:53 PM | 349 comments

Comments

labrador yesterday at 9:55 PM

I’m curious how this applies to systems like ChatGPT, which now have two kinds of memory: user-configurable memory (a list of facts or preferences) and an opaque chat history memory. If context is the core unit of interaction, it seems important to give users more control or at least visibility into both.

I know context engineering is critical for agents, but I wonder whether it's also useful for shaping personality and improving overall relatability. I'm curious whether anyone else has thought about that.

grafmax yesterday at 9:53 PM

There is no need to develop this ‘skill’. This can all be automated as a preprocessing step before the main request runs. Then you can have agents with infinite context, etc.

Havoc today at 11:33 AM

Honestly, this whole "context engineering" trend/phrase feels like something a Thought Leader on LinkedIn came up with. With a sprinkling of crypto-bro vibes on top.

Sure it matters on a technical level - as always garbage in garbage out holds true - but I can't take this "the art of the" stuff seriously.

saejox yesterday at 9:43 PM

Claude 3.5 was released a year ago, and current LLMs are not much better at coding than it was. Sure, they are shinier and more polished, but not much better at all. I think it is time to curb our enthusiasm.

I almost always rewrite AI-written functions in my code a few weeks later. It doesn't matter whether they have more context or better context; they still fail to write code easily understandable by humans.

Mikejames today at 8:20 AM

Anyone spinning up their own agents at work for internal tools? What's your stack? What's your workflow? I'm new to this stuff but have been writing software for years.

joe5150 today at 12:02 AM

Surely Jim is also using an agent. Jim can't be worth having a quick sync with if he's not using his own agent! So then why are these two agents emailing each other back and forth using bizarre, terse office jargon?

geeewhy yesterday at 10:30 PM

I've been experimenting with this for a while (I'm sure, in a way, most of us have). It would be good to enumerate some examples. When it comes to coding, here are a few:

- build scripts that can grep / compile a list of your relevant files as files of interest

- make temp symlinks in relevant repos to each other for documentation generation, and pass the documentation collected from the respective repos to enable cross-repo ops to be performed atomically

- build scripts to copy schemas, DB DDLs, DTOs, example records, API specs, and contracts (this still works better than MCP in most cases)

I've found these steps not only improve output but also greatly reduce cost by avoiding some "reasoning" hops. I'm sure the practice can extend beyond coding.
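The first bullet can be sketched in a few lines. This is a hedged illustration using only the standard library; the function name and extension list are invented for the example:

```python
from pathlib import Path

def files_of_interest(repo_dir: str, needle: str,
                      exts: tuple = (".py", ".ts", ".sql")) -> list[str]:
    """List files under repo_dir whose text mentions needle,
    suitable for pasting into an LLM's context as 'files of interest'."""
    hits = []
    for path in Path(repo_dir).rglob("*"):
        if path.suffix in exts and path.is_file():
            if needle in path.read_text(errors="ignore"):
                hits.append(str(path))
    return sorted(hits)
```

The point of emitting paths rather than file contents is that the context stays cheap until the model actually asks for a specific file.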

b0a04gl today at 4:57 AM

IMO it's just reinventing database design principles for LLMs: normalisation, denormalisation, indexing, retrieval. Same concepts, different target. It's really just being good at organising information, which is what we should have been doing all along.

patrickhogan1 yesterday at 10:07 PM

OpenAI’s o3 searches the web behind a curtain: you get a few source links and a fuzzy reasoning trace, but never the full chunk of text it actually pulled in. Without that raw context, it’s impossible to audit what really shaped the answer.

pwarner yesterday at 9:29 PM

It's an integration adventure. This is why much AI is failing in the enterprise. MS Copilot is moderately interesting for data in MS Office, but forget about it accessing 90% of your data that's in other systems.

adhamsalama yesterday at 9:50 PM

There is no engineering involved in using AI. It's insulting to call begging an LLM "engineering".

hnthrow90348765 yesterday at 9:58 PM

Cool, but wait another year or two and context engineering will be obsolete as well. It still feels like tinkering with the machine, which is what AI is (supposed to be) moving us away from.

ModernMech yesterday at 9:32 PM

"Wow, AI will replace programming languages by allowing us to code in natural language!"

"Actually, you need to engineer the prompt to be very precise about what you want the AI to do."

"Actually, you also need to add in a bunch of "context" so it can disambiguate your intent."

"Actually, English isn't a good way to express intent and requirements, so we have introduced protocols to structure your prompt, and various keywords to bring attention to specific phrases."

"Actually, these meta languages could use some more features and syntax so that we can better express intent and requirements without ambiguity."

"Actually... wait we just reinvented the idea of a programming language."

almosthere today at 2:40 AM

Which is prompt engineering, since you just ask the LLM for a good context for the next prompt.

walterfreedom yesterday at 11:53 PM

I am mostly focused on this issue during the development of my agent engine (mostly for game NPCs). It's really important to manage the context and not bloat the LLM with irrelevant stuff, for both quality and inference speed. I wrote about it here if anyone is interested: https://walterfreedom.com/post.html?id=ai-context-management
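The trimming half of that idea can be sketched simply. Everything below (names, the 4-characters-per-token heuristic, the budget) is invented for illustration and not taken from the linked post:

```python
def trim_context(messages: list[dict], budget_tokens: int = 2000) -> list[dict]:
    """Keep the first (persona) message plus the most recent turns that
    fit under a crude 4-characters-per-token budget, so the NPC stays
    in character without dragging stale turns through every inference."""
    def cost(m: dict) -> int:
        return len(m["content"]) // 4 + 1
    persona, turns = messages[0], messages[1:]
    kept, total = [], cost(persona)
    for m in reversed(turns):  # walk newest turns first
        if total + cost(m) > budget_tokens:
            break
        kept.append(m)
        total += cost(m)
    return [persona] + list(reversed(kept))
```

A real engine would likely score turns by relevance rather than pure recency, but the shape of the problem is the same.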

show 1 reply
alganet yesterday at 9:58 PM

If I need to do all this work (gather data, organize it, prepare it, etc), there are other AI solutions I might decide to use instead of an LLM.

asciii yesterday at 11:59 PM

Here I was, thinking that part of prompt engineering was understanding context and awareness and all that yada yada.

whimsicalism yesterday at 9:35 PM

I think context engineering as described is somewhat a subset of 'environment engineering.' The gold standard is when an outcome reached with tools can be verified as correct and hill-climbed with RL. Most of the engineering effort goes into building the environment and verifier, while the nuts and bolts of GRPO/PPO training and open-weight tool-using models are commodities.

bag_boy yesterday at 11:01 PM

Anecdotally, I’ve found that chatting with Claude about a subject for a bit — coming to an understanding together, then tasking it — produces much better results than starting with an immediate ask.

I’ll usually spend a few minutes going back and forth before making a request.

For some reason, it just feels like this doesn't work as well with ChatGPT or Gemini. It might be my overuse of o3? The latency can wreck the vibe of a conversation.

stillpointlab yesterday at 11:11 PM

I've been using the term context engineering for a few months now, I am very happy to see this gain traction.

This new stillpointlab hacker news account is based on the company name I chose to pursue my Context as a Service idea. My belief is that context is going to be the key differentiator in the future. The shortest description I can give to explain Context as a Service (CaaS) is "ETL for AI".

davidclark yesterday at 9:46 PM

Good example of why I have been totally ignoring people who beat the drum of needing to develop the skills of interacting with models. “Learn to prompt” is already dead? Of course, the true believers will just call this an evolution of prompting or some such goalpost moving.

Personally, my goalpost still hasn’t moved: I’ll invest in using AI when we are past this grand debate about its usefulness. The utility of a calculator is self-evident. The utility of an LLM requires 30k words of explanation and nuanced caveats. I just can’t even be bothered to read the sales pitch anymore.

bradhe yesterday at 10:02 PM

Back in my day we just called this "knowing what to google" but alright, guys.

retinaros yesterday at 10:15 PM

It is still sending a string of chars and hoping the model outputs something relevant. Let's not do what finance did and permanently obfuscate really simple stuff to make ourselves look bigger than we are.

prompt engineering / context engineering: string builder

retrieval-augmented generation: search + adding strings to the main string

test-time compute: running multiple generations and choosing the best

agents: a for loop and some ifs
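In the spirit of that reduction, here is a tongue-in-cheek sketch of an "agent" as exactly a for loop and some ifs; `call_llm` and the `TOOL:` reply convention are made up for the joke:

```python
def run_agent(task: str, call_llm, tools: dict, max_steps: int = 5) -> str:
    """An 'agent': a string builder, a for loop, and some ifs."""
    prompt = task                              # context engineering: string builder
    reply = ""
    for _ in range(max_steps):                 # agent: for loop
        reply = call_llm(prompt)
        if reply.startswith("TOOL:"):          # ...and some ifs
            name, _, arg = reply[5:].partition(" ")
            prompt += "\n" + tools[name](arg)  # RAG: add strings to the main string
        else:
            return reply
    return reply
```

Reductive, of course, but it is not far from what many agent frameworks do under the hood.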

aaronlinoops today at 4:02 AM

As models become more powerful, the ability to communicate effectively with them becomes increasingly important, which is why maintaining context is crucial for better utilizing the model's capabilities.

dboreham today at 2:28 AM

The dudes who ran the Oracle of Delphi must have had this problem too.

grumple today at 10:30 AM

After a recent conversation here, I spent a few weeks using agents.

These agents are just as disappointing as what we had before. Except now I waste more time getting bad results, though I’m really impressed by how these agents manage to fuck things up.

My new way of using them is to just go back to writing all the code myself. It’s less of a headache.

drmath yesterday at 10:59 PM

Isn't "context" just another word for "prompt?" Techniques have become more complex, but they're still just techniques for assembling the token sequences we feed to the transformer.
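That framing is easy to make concrete. A hedged sketch (field labels invented) of how every piece of "context" collapses into the one string the model actually sees:

```python
def assemble_prompt(system: str, memories: list[str],
                    history: list[str], query: str) -> str:
    """Concatenate all the 'context' into the single token sequence
    the transformer is fed; everything upstream is just assembly."""
    parts = [system]
    parts += [f"[memory] {m}" for m in memories]
    parts += history
    parts.append(f"[user] {query}")
    return "\n".join(parts)
```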

amelius yesterday at 10:26 PM

Yes, and it is a soft skill.

jongjong yesterday at 10:06 PM

Recently I started work on a new project and I 'vibe coded' a test case for a complex OAuth token expiry bug entirely with AI (with Cursor), complete with mocks and stubs... And it was on someone else's project. I had no prior familiarity with the code.

That's when I understood that vibe coding is real and context is the biggest hurdle.

That said, most of the context could not be pulled from the codebase directly but came from me after asking the AI to check/confirm certain things that I suspected could be the problem.

I think vibe coding can be very powerful in the hands of a senior developer, because if you're the kind of person who can clearly explain their intuitions in words, that's exactly the missing piece the AI needs to solve the problem... And you still need to do the code review aspect, which is also something senior devs are generally good at. Sometimes it makes mistakes/incorrect assumptions.

I'm feeling positive about LLMs. I was always complaining about other people's ugly code before... I HATE over-modularized, poorly abstracted code where I have to jump across 5+ different files to figure out what a function is doing; with AI, I can just ask it to read all the relevant code across all the files and tell me WTF the spaghetti is doing... Then it generates new code which 'follows' existing 'conventions' (same level of mess). The AI basically automates the most horrible aspect of the work; making sense of the complexity and churning out more complexity that works. I love it.

That said, in the long run, building sustainable projects will require following good coding conventions and keeping 'low-effort' AI-generated code to a minimum... because the codebase can explode in complexity if this isn't used carefully. Code quality can only drop as the project grows. Poor abstractions tend to stick around and have negative flow-on effects which impact just about everything.

m3kw9 yesterday at 10:01 PM

Well, it’s still a prompt

ninetyninenine today at 4:48 AM

If we do enough "context engineering", we'll be feeding these companies the training data they need for the AI to build its own context.

croes today at 4:29 AM

Next step: solution engineering. Provide the solution so the AI can give it back to you in nicer words.

neilv yesterday at 11:09 PM

> Then you can generate a response.

> > Hey Jim! Tomorrow’s packed on my end, back-to-back all day. Thursday AM free if that works for you? Sent an invite, lmk if it works.

Feel free to send generated AI responses like this if you are a sociopath.

la64710 yesterday at 9:59 PM

Of course, the best prompts have always included providing the best (not necessarily the most) context to extract the right output.

rvz yesterday at 10:11 PM

This is just another "rebranding" of the failed "prompt engineering" trend to promote another borderline pseudo-scientific trend and attract more VC money to fund a new pyramid scheme.

Assuming that this will be using the totally flawed MCP protocol, I can only see more cases of data exfiltration attacks on these AI systems just like before [0] [1].

Prompt injection + Data exfiltration is the new social engineering in AI Agents.

[0] https://embracethered.com/blog/posts/2025/security-advisory-...

[1] https://www.bleepingcomputer.com/news/security/zero-click-ai...

intellectronica yesterday at 9:37 PM

See also: https://ai.intellectronica.net/context-engineering for an overview.

LASR today at 12:21 AM

Honestly, GPT-4o is all we ever needed to build a complete human-like reasoning system.

I am leading a small team working on a couple of “hard” problems to put the limits of LLMs to the test.

One is an options trader. Not algo / HFT, but simply doing due diligence, monitoring the news and making safe long-term bets.

Another is an online research and purchasing experience for residential real-estate.

Both these tasks, we’ve realized, you don’t even need a reasoning model. In fact, reasoning models are harder to get consistent results from.

What you need is a knowledge base infrastructure and pub-sub for updates. Amortize the learned knowledge across users and you have a collaborative self-learning system that exhibits intelligence beyond any one particular user and is agnostic to their level of prompting skill.

Stay tuned for a limited alpha in this space. And DM if you’re interested.
