Lovelace emerges from stealth with context engine that claims 1000x AI investigative power

The New Stack // Darryl K. Taft

Lovelace founder Andrew Moore says enterprise AI keeps failing investigative tasks. His answer is a context engine that claims 1000x the investigative power at 1/1000th the token cost — built for environments where being wrong is not an option.

Andrew Moore has spent two decades watching enterprise AI projects fail the tests that mattered most. Not the demos. Not the chatbots. The hard ones — the investigative questions that require joining millions of dots across petabytes of data in environments where a wrong answer can cost a fortune, a freedom, or a life.

Now Moore, the former head of Google Cloud AI, former dean of Carnegie Mellon University’s School of Computer Science, and the first AI advisor to US CENTCOM, is coming out of stealth with Lovelace AI and its flagship product Elemental, an enterprise context engine builder designed to fix what he says is the core reason high-stakes AI deployments keep failing.

“Throughout my career, I’ve been driven by a simple question: how can we use advanced intelligence to help people make the right decision when the cost of being wrong is catastrophic?” Moore said in the company’s launch statement. “AI has extraordinary potential in investigative contexts – but only if it unambiguously helps humans make better decisions.”

The problem with investigative questions
Moore draws a sharp distinction between what large language models do well and where they consistently fall short. Summarization tasks, conversational queries, and basic research all work well, he says. But what he calls “investigative questions” are a different animal entirely.

“A person responsible for loans at a large bank might want to ask, ‘Is there any reason that the news today means I’ve got to be worried that some of the collateral on the loans I’ve agreed to is looking flaky?'” Moore said in a briefing with The New Stack. “To answer that, your large language model is having to look at and touch millions of pieces of information.”

The same problem shows up in national security contexts, Moore said. He described a customs inspector trying to determine, from the thousands of ships entering a port each week, which ones most likely conceal weapons or human trafficking violations, a question that requires aggregating and cross-referencing enormous volumes of behavioral, cargo, and ownership data in near real time.

Current LLMs choke on this class of question, Moore argues, because they are forced to search a massive wall of data inefficiently, burning tokens and time without the structured context needed to reason reliably. Standard retrieval-augmented generation, or RAG, helps but only goes so far, he says.

“RAG is very effective, but for many investigative questions, you really need information aggregated and searched over millions of source documents,” Moore told The New Stack. “That’s the difference: RAG lets us do hundreds; a multi-terabyte context engine lets us do millions.”
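To make the scale gap concrete, the sketch below shows the top-k retrieval step at the heart of a standard RAG pipeline. It is a minimal, generic illustration, not Lovelace’s Elemental or any vendor’s API; the names (embed, retrieve_top_k, llm_complete) and the loan-collateral example are placeholders. The key point is that only the handful of highest-scoring snippets fit into the model’s context window, so evidence spread thinly across millions of sources never reaches the model in one pass.

```python
# Generic sketch of a standard top-k RAG pipeline (illustrative only;
# all names are placeholders, not Lovelace or any specific vendor's API).
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    text: str


def embed(text: str) -> list[float]:
    """Placeholder embedding: a real pipeline calls an embedding model."""
    # Toy hash-derived vector so the sketch runs without external services.
    return [float((hash(text) >> shift) & 0xFF) for shift in range(0, 32, 8)]


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0


def retrieve_top_k(query: str, corpus: list[Document], k: int = 5) -> list[Document]:
    """Score every document against the query and keep only the k best matches.

    This cutoff is what limits a RAG pipeline to 'hundreds' of documents:
    only the top-k snippets fit into the model's context window.
    """
    q_vec = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q_vec, embed(d.text)), reverse=True)
    return ranked[:k]


def llm_complete(prompt: str) -> str:
    """Stub for an LLM call; a real pipeline would query a hosted model here."""
    return f"(model answer based on a prompt of {len(prompt)} characters)"


def answer_with_rag(query: str, corpus: list[Document]) -> str:
    context = "\n\n".join(d.text for d in retrieve_top_k(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return llm_complete(prompt)


if __name__ == "__main__":
    corpus = [
        Document("filing-1", "Quarterly report notes collateral tied to commercial real estate."),
        Document("news-2", "Regional office vacancies hit a record high this week."),
    ]
    print(answer_with_rag("Is any loan collateral at risk given today's news?", corpus))
```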
