Reconstructed based on unformatted data provided by Twitter.
It's easy enough to browse the data, but what sort of research findings are there? Here are a few walk-through examples.
Example 1: Similar accounts
People use tools, and their methods often have consistencies.
The archive can be used to find very similar accounts.
Use the button to view all of the foreign influence accounts from the Twitter archive. This will display a large table showing the account names, creation dates, and first-tweet times.
If you click on the table heading 'Elapsed', it will sort all of the results by the elapsed time between when the account was created and when it first tweeted. Click it a second time to sort in descending order.
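The same elapsed-time sort can be approximated offline. Here's a minimal sketch, assuming hypothetical sample rows of (account, creation date, first tweet time); the real field names and formats in Twitter's released data may differ:

```python
from datetime import datetime

# Hypothetical sample rows: (account, creation date, first tweet time).
# These names and values are illustrative, not from the real archive.
accounts = [
    ("account_a", "2013-07-25", "2015-01-06 14:02:11"),
    ("account_b", "2014-03-02", "2014-03-02 09:15:40"),
    ("account_c", "2013-07-25", "2015-01-06 16:33:05"),
]

def elapsed_days(created, first_tweet):
    """Days between account creation and first tweet."""
    c = datetime.strptime(created, "%Y-%m-%d")
    t = datetime.strptime(first_tweet, "%Y-%m-%d %H:%M:%S")
    return (t - c).days

# Sort descending by elapsed time -- like clicking 'Elapsed' twice.
ranked = sorted(accounts, key=lambda r: elapsed_days(r[1], r[2]), reverse=True)
for name, created, first in ranked:
    print(name, elapsed_days(created, first))
```

Accounts that cluster at the same elapsed value (like account_a and account_c above) are the ones worth comparing side by side.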
Notice how there are lots of accounts that were created on the same date, and first used at the same time, years later. Start comparing accounts that have similar creation and first-use dates.
For example, there are four Russian IRA accounts that were created on 2013-07-25 and first used on 2015-01-06 (elapsed time: 1 year, 5 months, 12 days). Open each account in a new browser tab:
Look over each of the four accounts. By itself, each appears to be a person interested in the Black Lives Matter movement.
Now look at the accounts as a group. Are there similarities between the accounts?
The 1st tweet from each account was made on the same day (2015-01-06) and contains 4 pictures.
The 2nd tweet from each account was made on the same day (2015-01-06) and contains 2 pictures.
The 3rd tweet from each account was made the next day (2015-01-07) and contains a short saying.
The next few tweets from each account contain random quips. In fact, of the first 25 tweets from each account, tweets 3-25 all contain quips and were all tweeted in a burst -- seconds apart.
Eventually, there is a single political tweet (likely after the account gained a few followers). Different accounts gained followers at different rates.
The more they tweeted, the more political they became.
By the time they stopped tweeting, 3 of the 4 bot accounts were almost entirely political tweets. (The 4th was a combination of political, music, and sports.)
These are not real people tweeting. These are faux accounts designed to look like real people, but they all follow the same programmatic pattern.
This isn't the only example. If you look for accounts with the same elapsed time (duration between account creation and first tweet), you might see similar patterns between those accounts.
Example 2: Similar content
Many of the accounts have HTML-encoded text.
For example, many tweets by the Russian IRA "@ChicagoDailyNews" account contain "&amp;" instead of "&". (That's how HTML encodes the ampersand character.)
This is NOT a bug in this recreation engine or in how Twitter stored tweet info. Rather, this is a bug in the scripts used by both Russia's IRA and Iran to post automated tweets.
How do we know it's a bot bug and not Twitter's? There are some tweets that contain just "&" to mean 'and'. If it were Twitter's bug, they would all say "&amp;".
There are also some instances of "&amp;quot;", which is a double HTML-encoding bug. (One decoding yields "&quot;", and a second decoding yields the double-quote character, ".)
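The double-encoding bug is easy to reproduce: a posting script that HTML-escapes text that was already escaped produces exactly this artifact. A minimal sketch using Python's standard html module:

```python
import html

# A bot that HTML-escapes text before posting turns "&" into "&amp;".
once = html.escape("Tom & Jerry")      # "Tom &amp; Jerry"

# Escaping already-escaped text produces the double-encoding bug.
twice = html.escape(once)              # "Tom &amp;amp; Jerry"

# Decoding recovers one layer at a time, matching the '&amp;quot;' case:
print(html.unescape("&amp;quot;"))     # first decoding: &quot;
print(html.unescape("&quot;"))         # second decoding: "
```

One escape pass explains the "&amp;" tweets; an accidental second pass explains the "&amp;quot;" tweets.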
Example 3: Irregular content
There are a few user accounts with irregular content. Consider one account in particular.
This person appears to be American and talks about American social issues.
He started out using "Twitter Web Client", then switched to "masss post5" as he became more political. His last post used "Tweetdeck" and contained Russian content.
This is an example of a screw-up: the user managing the account posted content to the wrong account. (When you're managing a few hundred accounts, it may be difficult to keep them straight.) There are a couple of screw-up instances -- look for accounts where the Twitter client changes. Each change should be associated with a dramatic change in the faux personality assigned to the account.
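Hunting for these screw-ups can be scripted: flag any account whose posting client changes over time. A minimal sketch, assuming hypothetical tweet records of (account, timestamp, client) -- the real archive's field names may differ:

```python
# Hypothetical tweet records: (account, timestamp, client name).
# Values are illustrative, not from the real archive.
tweets = [
    ("acct1", "2016-01-01", "Twitter Web Client"),
    ("acct1", "2016-05-01", "masss post5"),
    ("acct1", "2016-09-01", "Tweetdeck"),
    ("acct2", "2016-01-01", "Twitter Web Client"),
]

def client_changes(rows):
    """Map each account to its ordered list of distinct clients,
    keeping only accounts that switched clients at least once."""
    seen = {}
    for acct, ts, client in sorted(rows):  # chronological per account
        clients = seen.setdefault(acct, [])
        if not clients or clients[-1] != client:
            clients.append(client)
    return {a: c for a, c in seen.items() if len(c) > 1}

# acct1 switched clients twice; acct2 never did.
print(client_changes(tweets))
```

Each flagged account can then be read manually to see whether the client change lines up with a shift in the account's assigned personality.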
Example 4: Scary
After looking over the data for a while, you might notice some really scary aspects. For example:
The accounts are mainly divided into 3 categories: news/journalists (accounts that pretend to be media in order to give their biases more credibility), "people" (fake users), and anonymous groups (these are the minority).
For clusters of accounts, there might be one account that is extreme in one direction, one extreme in the opposite direction, and a variety of accounts in between. For example, you might see a strong anti-immigrant account and a strong pro-immigrant account in the same cluster. Or you might see a cluster with some pro-Trump accounts and some anti-Trump accounts. These fake accounts are not trying to support one side of any given debate. By playing both sides of a debate, these accounts are trying to create division.
The elapsed time shows that some accounts were created 4 or more years before being used. This makes them "sleeper accounts". Iran and Russia are playing a very long game, with sleeper accounts that have been waiting years to be activated.