Watching the flippening on Twitter with Streamr and Oraclize

We’ve met with the Oraclize people a few times, and we love what they do. There are also important synergies in what Streamr and Oraclize can do together. Let me explain, and show an example.

For fun, and to demonstrate Streamr and Oraclize working in tandem, we built a little example that watches the flippening (Ethereum potentially overtaking Bitcoin as the leading blockchain) in real time. The idea was inspired by a flippening tracker site, which lists many metrics such as market cap and number of transactions, but none related to social media. So we chose to add some analytics using social media data. Using the streaming Twitter API and Streamr, we built a process that listens to raw tweets mentioning either Ethereum or Bitcoin, counts the tweets over various timeframes, and makes the results available from our API. Later in this post, we’ll query this data from a smart contract.
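The counting logic can be sketched outside the Streamr Editor, too. Below is a minimal Python sketch of a rolling-window counter of the kind the Canvas implements; the class name and the timestamps are illustrative, not part of any Streamr API:

```python
from collections import deque

class RollingCounter:
    """Counts events that fall within a rolling time window (in seconds)."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.timestamps = deque()

    def add(self, timestamp):
        """Record one event (e.g. one matching tweet) at the given timestamp."""
        self.timestamps.append(timestamp)

    def count(self, now):
        """Evict events older than the window, then return the current count."""
        while self.timestamps and self.timestamps[0] <= now - self.window:
            self.timestamps.popleft()
        return len(self.timestamps)

# Hypothetical usage: tweets mentioning Ethereum, counted in a 60-second window.
eth_counter = RollingCounter(60)
eth_counter.add(0)
eth_counter.add(30)
eth_counter.add(90)
print(eth_counter.count(100))  # → 1 (only the tweet at t=90 is still in-window)
```

A production Canvas would keep one such counter per window length (1 minute, 1 hour, 24 hours) per source stream.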

Just to show you what the data currently looks like, here’s a table of real-time Twitter statistics, as received from our streaming API:

Window     Ethereum tweets   Bitcoin tweets   Flippening
1 min      (live)            (live)           (live)
1 hour     (live)            (live)           (live)
24 hours   (live)            (live)           (live)
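The Flippening column compares the two counts for each window. The post does not spell out the exact formula, so as a hedged illustration, one natural metric is the ratio of Ethereum tweets to Bitcoin tweets, where a value above 1.0 would mean Ethereum is out-tweeting Bitcoin in that window:

```python
def flippening_ratio(eth_tweets, btc_tweets):
    """Ratio of Ethereum to Bitcoin tweet counts for one window.
    A value > 1.0 means Ethereum is 'flippening' Bitcoin on Twitter
    (this particular metric is our assumption, not the post's definition)."""
    if btc_tweets == 0:
        return float("inf") if eth_tweets > 0 else 0.0
    return eth_tweets / btc_tweets

print(flippening_ratio(450, 900))  # → 0.5
```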

In the Streamr Engine, there are two ways to convey data to smart contracts. The first method is event-driven: sending transactions directly from the Canvas via the EthereumCall module (see this or this example). The second option is to query data from our web API when needed. This is preferable in use cases where a smart contract needs the data upon request instead of being constantly notified about the newest data.

Oraclize can easily facilitate this. They offer a request-response bridge between the blockchain and any web API such as Streamr’s. A smart contract can call a function on the Oraclize smart contract in order to request data from a source such as a URL. Their off-chain system watches for these calls, and when one occurs, they fetch the required data and send the response back to the requesting smart contract by calling its callback function. Oraclize can even generate a proof if required, cryptographically showing that the data really came from our web API.

The results are being calculated from raw tweets using a canvas that counts the tweets in 1 minute, 1 hour, and 24 hour rolling windows and produces the results to a new stream. The result stream can be listened to by external applications, similarly to how this web page subscribes to it to show the table above. However, events in the stream can also be queried via our HTTP API. The following URL returns the latest event in the stream:
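The URL itself is not reproduced here, but as a rough sketch of consuming such an endpoint from an external application (the exact response shape is an assumption; Streamr's real API may differ), something like the following Python works against any JSON-returning endpoint:

```python
import json
from urllib.request import urlopen

def latest_event(url):
    """Fetch and decode the most recent event from a stream's HTTP endpoint.
    The URL is supplied by the caller; the one from the post was elided."""
    with urlopen(url) as resp:
        return parse_latest(resp.read().decode("utf-8"))

def parse_latest(body):
    """Assume the endpoint returns a JSON array of events; take the newest."""
    events = json.loads(body)
    return events[-1] if isinstance(events, list) else events

# Offline example with a sample payload of the kind shown in the table above:
sample = '[{"window": "1 min", "eth": 12, "btc": 30, "flippening": 0.4}]'
print(parse_latest(sample)["flippening"])  # → 0.4
```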
To enable a smart contract to get the latest data on demand (for betting purposes, for example 😊), you can use Oraclize. Below is an example for Ethereum, written in Solidity. It requests Oraclize to fetch the current statistics from the Streamr API by calling the oraclize_query function. The result is soon thereafter delivered to the __callback function by Oraclize:
pragma solidity ^0.4.0;
import ""; // Oraclize API import (path elided in the original post)

contract StreamrFlippeningDemo is usingOraclize {

    string public latest;

    event newOraclizeQuery(string description, uint256 fee);
    event newFlippeningData(string data);

    function StreamrFlippeningDemo() {
        // update();
    }

    function __callback(bytes32 myid, string result) {
        if (msg.sender != oraclize_cbAddress()) throw;
        latest = result;
        newFlippeningData(result);
    }

    function update() payable {
        uint256 fee = oraclize_getPrice("URL");
        if (fee > this.balance) {
            newOraclizeQuery("Oraclize query was NOT sent, please add some ETH to cover for the query fee.", fee);
        } else {
            newOraclizeQuery("Oraclize query was sent, standing by for the answer..", fee);
            oraclize_query("URL", "json("); // query URL elided in the original post
        }
    }
}

Streamr Editor is a low-code environment that allows users to build data-driven processes visually using drag and drop. Below, you can see the Canvas that counts tweets for the various time frames on both source streams, and produces the result into another stream (click here to open in full screen):

Take a look at our white paper draft if you want to dig deeper into our stack. In the draft we explain how streaming data can be tokenized and traded in the peer-to-peer Streamr Network powered by a token called DATAcoin. As all data in the Streamr Network will be signed at the source, smart contracts always receive trusted data from the real world. In this scenario, Oraclize can act as a valuable request-response bridge which natively supports data queries from the Streamr Network.

This simple example is just a taster, but it should immediately help you get started in building data-driven smart contracts. There’s so much more that can be achieved using Streamr and Oraclize, either together or separately, depending on the use case. We’ll be hard at work making the data-driven decentralized vision a reality!

Questions and comments about this post and Streamr in general are appreciated! Join us on Slack, and of course feel free to follow us on Twitter as well.

Golem + Streamr = ♥

A fascinating blog post written last year by Golem Factory CTO Piotr Janiuk describes an imaginary application called Hyperbrick, which allows users to combine microservices into workflows. By happy coincidence, the Streamr Engine and Streamr Editor, as presented in our DATAcoin white paper draft, are remarkably close to Golem’s example of an ideal application in their ecosystem. This blog post is the direct result of having our minds blown while considering the overall implications of this space and its emerging tech.

A quick reminder:

  • In Streamr, “workflows” are called Canvases; they consist of modules that operate on incoming data.
  • The Streamr Engine that powers it all can easily be made to run inside a Docker container, which is natively supported by Golem.

Right now, the Streamr Engine (i.e. the component which processes our data feeds) runs on more or less any computer, most often in a cloud environment. However, running in the cloud means that a centralized commercial entity like Amazon or Microsoft controls the server hardware, network, storage, and price of computation. They can also plainly see who is doing what.

In our vision, the data transport from sources to Streamr Engine nodes running on Golem would happen via the Streamr Network. This is a peer-to-peer network which ensures the data gets delivered and that it can be trusted. With Golem, the Streamr Engine, and the Streamr Network working together, computation in real-time analytics use cases could be completely decentralized, and the pricing would be determined transparently by the market. Security, privacy, and quite possibly pricing, would be vastly improved.

The current prototypes of Golem support tasks of finite computation. This works well for large, batch-oriented computation such as 3D rendering. However, for more fine-grained computation such as hosting microservices or stateful apps, support for long-running tasks/processes is required. The same goes for more obvious matters such as access to networking.

Happily, these features are indeed on the Golem roadmap. When available, these features will allow Golem to host processes such as the Streamr Engine, potentially processing hundreds of thousands of incoming requests or data points per second on a single node, and many millions on a cluster of Golem-Streamr-nodes.

As best we can tell, Streamr on Golem could bring about incredible use cases in the domains of algorithmic trading, asset tracking, anomaly detection, predictive analytics, and much more. Needless to say, we look forward to experimenting with Golem, and to the world of decentralized distributed compute in general. Three cheers to the #truecloud!

Questions and comments about this post and Streamr in general are appreciated! Join us on Slack, and of course feel free to follow us on Twitter as well.