The Next Oracle Problem Is Not Price

For a while now, the problem oracles solve in blockchain systems has been clear: how do you get reliable data on-chain? And in an industry whose growth and innovation have centered on decentralized finance (DeFi), that question became even more specific: how do you get reliable prices on-chain?

That question still matters, and Switchboard has built, and continues to pioneer, decentralized infrastructure to tackle the underlying problem. But it may no longer be the only question that matters.

Price Discovery

Until recently, price discovery has mostly taken place off-chain, with oracles acting as the data layer bringing real-time price data on-chain. However, the majority of market-moving information does not show up as a clean number that you can just stream on a data feed. It shows up as a post, a screenshot, a video clip, a breaking news article, or a rumor moving across social media and group chats. By the time that information is formally verified (or not), the market has already moved.

This compounds an existing problem. For most assets, on-chain price discovery has always come second. After all, that is the original oracle challenge: price discovery happens off-chain, and oracles pipe prices onto the blockchain immediately afterward, some with more latency than others. But when the inputs to price formation are unstructured, that process gets longer. Information has to be interpreted, acted on, and reflected in price before it can ever make its way on-chain. By the time information is clean enough to record, someone has already moved the market somewhere else. In this scenario, the goal of better data infrastructure is not just to give on-chain participants a speed advantage, but also to close the structural gap: to create a verification layer for unstructured information that operates fast enough that on-chain markets work with the same information environment as everyone else, at the same time as everyone else.

Cutting Through The Noise

But verifying unstructured information is hard, and it has gotten a lot harder. AI has made believable content much cheaper to produce. Prediction markets and automated trading make narrative distortion more profitable. Social platforms reward whatever spreads fastest, not whatever is best sourced. Put these together and the internet's failure mode comes into focus: too much plausible noise.

In that environment, "more data" is not necessarily "better data." In some cases, it may be the opposite.

That raises an interesting question for data infrastructure. If applications, traders, and AI agents are going to act on real-time internet information, what should count as usable data? Raw access may not be enough. The more valuable layer may be a trust layer: something that helps separate primary evidence from repetition, organic attention from coordinated amplification, and genuine signals from incentive-driven noise.

Evaluating Signals

One way to think about it is that the unit of analysis may be changing. A price feed is structured. Internet-native information is not. A hundred accounts repeating the same claim should not count as a hundred confirmations. A viral post is not "true" just because it spreads. A source that is reliable in one domain may be unreliable in another. None of that fits neatly into the old model of simply transporting data from point A to point B.

So maybe the next version of this problem is less about moving data and more about evaluating it.

What would that mean in practice? Probably not a single truth score handed down from above. More likely, it would mean combining a few different signals: how close a claim is to a primary source, how many genuinely independent confirmations exist, how a source has performed on similar claims in the past, and whether a narrative appears to be spreading organically or through coordination. The output might be less "true" or "false" and more "corroborated," "unverified," "contested," or "likely amplified."
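
To make that concrete, here is a minimal sketch of what folding those signals into a label could look like. It is illustrative only, not a description of anything Switchboard ships today: the ClaimSignals fields, the evaluateClaim function, and every threshold in it are hypothetical placeholders.

```typescript
// Hypothetical shape for a claim's evaluation signals. Field names and
// ranges are illustrative, not part of any existing API.
interface ClaimSignals {
  sourceProximity: number;          // 0..1, how close the claim is to a primary source
  independentConfirmations: number; // confirmations counted after deduplicating reposts
  sourceTrackRecord: number;        // 0..1, historical accuracy on similar claims
  coordinationScore: number;        // 0..1, likelihood the spread is coordinated, not organic
}

type ClaimStatus = "corroborated" | "unverified" | "contested" | "likely amplified";

// One possible way to reduce the signals to a status label. The thresholds
// are placeholders; a real system would calibrate them against outcomes.
function evaluateClaim(s: ClaimSignals, conflictingReports: number): ClaimStatus {
  if (s.coordinationScore > 0.7) {
    return "likely amplified";
  }
  if (conflictingReports > 0) {
    return "contested";
  }
  const corroborated =
    s.sourceProximity > 0.6 &&
    s.independentConfirmations >= 2 &&
    s.sourceTrackRecord > 0.5;
  return corroborated ? "corroborated" : "unverified";
}

// Example: a claim repeated by a hundred accounts but traced back to a single
// screenshot with no primary source stays "unverified" rather than "corroborated".
const viralScreenshot: ClaimSignals = {
  sourceProximity: 0.2,
  independentConfirmations: 1, // a hundred reposts collapse to one origin
  sourceTrackRecord: 0.4,
  coordinationScore: 0.3,
};
console.log(evaluateClaim(viralScreenshot, 0)); // "unverified"
```

The specific numbers matter less than the shape of the output: a handful of noisy, partly qualitative signals reduced to a label that downstream applications and agents can actually act on.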

That may sound like a media problem, but it feels increasingly like a market problem. If even brief bursts of misleading information can move sentiment, shift probabilities, or trigger automated systems, then trust becomes part of the data itself. It's no longer just about freshness or uptime, but also about provenance, calibration, and resistance to manipulation.

Oracles In A Changing Data Landscape

This is one reason the idea feels relevant to oracles, even if it sits outside the traditional frame. Oracles originated as infrastructure to bring external facts into programmable systems. But what counts as an external fact is getting messier. And programmable systems are becoming smarter. AI agents are already making decisions, executing transactions, and acting on data autonomously. That raises additional questions that sit just beyond the edge of the current oracle frame: if you can't fully trust the data an agent is acting on, can you trust the agent's actions? And how can you verify those actions without exposing them?

We do not have a neat answer yet, but this seems like a question worth trying to answer right now: in a world of AI-generated noise, coordinated narratives, and increasingly automated decision-making, what should a high-quality data feed actually look like?

Maybe the next oracle problem is still about truth. Just not in the relatively simple, numerical way we are used to.