> accounts with more than 10,000 followers should at least need two people to change key settings
For accounts that could start a war this might be necessary, but for celebrities with >10K followers this sounds expensive and unnecessary to me.
To me, it seems like you could instead ensure the admin view of every account has a timestamped log of recent settings changes, including changes done by admins, with a link to the profile of the admin responsible, and a button to suspend that admin account with one click.
This way, the security team could've seen that Elon Musk's account had just been reset by J. Random Employee minutes before tweeting the suspicious bitcoin tweet, messaged J. on Slack to be like "hey did you do that?", and suspended the compromised admin account within minutes.
Sure, some accounts might be briefly compromised initially, but it would be resolved in minutes and not the hours that it took Twitter, right? That seems fine for what should be a relatively low-likelihood, high-expense attack like compromised admin account (of course, you have to ensure that is the case).
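The admin-facing change log and one-click suspend described above could be sketched roughly like this (a minimal illustration; the class and field names are my own, not anything Twitter actually runs):

```python
import time
from dataclasses import dataclass

@dataclass
class SettingsChange:
    timestamp: float
    account: str
    setting: str
    admin_id: str  # links back to the admin responsible for the change

class AdminAudit:
    """Sketch: per-account settings-change log plus one-click admin suspension."""
    def __init__(self):
        self.log = []                 # append-only list of SettingsChange
        self.suspended_admins = set()

    def record(self, account, setting, admin_id, ts=None):
        if admin_id in self.suspended_admins:
            raise PermissionError(f"admin {admin_id} is suspended")
        self.log.append(SettingsChange(ts or time.time(), account, setting, admin_id))

    def recent_changes(self, account, since):
        # what the security team would see next to the suspicious tweet
        return [c for c in self.log if c.account == account and c.timestamp >= since]

    def suspend_admin(self, admin_id):
        # the "one click": immediately blocks further changes by this admin
        self.suspended_admins.add(admin_id)
```

The point of keeping suspension inside the same object is that the responder never leaves the audit view to contain the incident.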
If twitter ‘verified’ means anything, it means a chain of identity has been established between Twitter and the purported owner of that account. That chain should be documented somewhere - there must be some record in the ‘verified account management’ system that says something to the effect of ‘after we gave this actual verified human this token, this email from this address arrived on this date containing that token, establishing that this person had control over that email address on that date’.
If random twitter admins can change the email address and disable 2fa on verified twitter accounts, and those accounts can still publish tweets without them going into a ‘held pending verification that the blue check mark still applies to the person now in control of that account’ queue, then twitter verification doesn’t mean much.
Which might be of interest to organizations like the SEC who hold that communications over a verified twitter account count as official corporate notices, and various public safety organizations that have let it be known that messages on their twitter can be relied on as a source of official information during natural disasters...
This is exactly the problem with the blue tick. It's basically meaningless other than as a badge of honour. It's also restricted to large companies and 'public' figures.
What I'd like to see is, the Blue Tick being restored to be an actual mark of Verification, and be something that anyone can apply for with the appropriate identification documentation.
Additionally, there should then be a toggle switch, where only Verified accounts see tweets and replies from other Verified accounts[1]. This would effectively create two Twitters; one where every account is identifiable and accountable for what they tweet, and another that continues with the anarchic system they have now, where hate speech, racism and intolerance run rife[2].
---
[1] I've heard rumours that this toggle switch already exists on Verified accounts - can anyone confirm?
[2] Yes, free speech may be trapped here too, unless some sort of middle ground can be worked out.
I think Yonatan Zunger had the right solution to this: verify facts, not people.
Replace the checkmark with a Verified Badge that says their position if they're an elected official or major organization leader, or just Real Name if they've verified their ID.
He wrote this in response to the Jason Kessler kerfuffle, well before the "factcheckUK" stunt, but it would have actually solved that too! Imagine seeing the username "factcheckUK" with the Verified Badge "Conservative and Unionist Party, UK 🇬🇧".
Sounds just like the 'real names' policy that Google and Facebook have tried before. That never made any difference to hate speech, racism and intolerance, so why do you think it will magically make Twitter better?
I can imagine this would at minimum help with bots. The problems you mention are systemic issues in our society, so I wouldn't expect the verification process to be perceived as a way of solving those.
I have no idea how comment bots run today, but for anyone only slightly invested, it would mean one additional hurdle (get accounts for actual people who are not using the service), but not a blocker (like captchas: they mostly serve to annoy regular users).
That seems like a pretty large hurdle, compared to today where it seems they're just creating as many accounts as they want. There isn't a single political tweet without a bunch of bot garbage as replies. I assume any name with 8 digits at the end is a bot and an auto-block (and these are all over political tweets), but there are so many more tweets that are highly suspicious once you go in and look at their feed.
Because the world is in a different place now. As long as twitter doesn't make it mandatory, it should work. Those that want a bit of decency on Twitter can get verified and have the knowledge that the people they converse with are who they say they are, and those that do not can carry on using Twitter in the same way they always have.
Do you really think blue check marks are ‘meaningless’? Are you thinking of them in the context of it being somewhat arbitrary who, among your peers, has a blue check and who doesn’t? Because that's certainly common if you’re part of a professional community - you’ll find academics and journalists and medical professionals and so on all have very random experiences with blue check marks, much like tech does.
But at the same time, in aggregate, blue check marks do provide some degree of legitimacy to accounts - yes, this account is that person you know from outside twitter.
But twitter don’t do a great job of explaining how they verified an account, or even who they verified it to be. As has recently gone viral, twitter gave @sistersofmercy a blue check mark, because they really are the catholic institute of that name - not the band (@tsomofficial doesn’t have a blue check mark ...)
I think there’s the basis of something interesting in ‘verified accounts’ - and for sure they’re flawed - but I don’t think ‘meaningless’ is correct.
What do you think of Yonatan Zunger's proposed fix, to verify facts and not people?
So @sistersofmercy might get a Verified Badge saying "Religious Institute, Ireland", and @tsomofficial might get a Verified Badge saying "Musical Group, UK".
This problem has been solved for decades - cryptographic signatures. Twitter and their users are uninterested in a real solution, they just want engagement.
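For illustration, here is a toy sketch of what signed tweets could look like. It uses a shared-secret HMAC purely for brevity; a real deployment would use an asymmetric scheme (e.g. Ed25519) so anyone can verify a signature without holding the signing key - which is exactly the infrastructure problem raised downthread:

```python
import hmac
import hashlib

def sign_tweet(secret: bytes, text: str) -> str:
    # The account holder signs the tweet text with their key.
    return hmac.new(secret, text.encode(), hashlib.sha256).hexdigest()

def verify_tweet(secret: bytes, text: str, signature: str) -> bool:
    # Clients recompute and compare in constant time; a tweet published
    # via a compromised admin console would carry no valid signature.
    expected = sign_tweet(secret, text)
    return hmac.compare_digest(expected, signature)
```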
As twitter only has about 5k employees, having more than 1k (i.e. over 20%) with access like this is shocking, and the fact that "J. Random Contractor" has access is even more so.
Twitter needs to get serious period and not just blue checks.
Also, a lot of the other FANG-type companies are effectively CNI (critical national infrastructure) - I think they need to start properly vetting people, and I mean real security clearance, possibly including TS.
> held pending verification that the blue check mark still applies to the person now in control of that account
That sounds to me like it's simply having a 2nd person verify the email change is correct. So your suggestion and the article's suggestion ("should at least need two people to change key settings") seem to be very similar if not the same as each other.
The process can be automated, depending on how the account needs to be verified. For a brand account, say, it might be connected to a domain ownership/verification model, where 1) the email address must be in a particular domain, and 2) the email owner must be able to complete a challenge/response process that demonstrates they control the company website - e.g. some random value is sent to the email address and they have to make it available via an HTTPS resource on the company domain.
For personal celebrity accounts maybe verification is just that the email address must send in a scan of a government photo ID, and sure, maybe that triggers a manual check - but it’s not just a ‘two keys’ solution, it’s a verification process.
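The brand-account flow above could be sketched as follows. This is an illustration under my own assumptions - in particular the `.well-known` path is invented for the sketch, not a real convention - and the fetch function is injectable so the check can be exercised without a live domain:

```python
import secrets
from urllib.request import urlopen

def issue_challenge() -> str:
    # the random value that gets sent to the account's contact email
    return secrets.token_urlsafe(16)

def verify_domain_control(domain: str, token: str, fetch=None) -> bool:
    """Check that the token has been published at an HTTPS path on the
    company domain (the path name here is an assumption of this sketch)."""
    url = f"https://{domain}/.well-known/twitter-verification.txt"
    fetch = fetch or (lambda u: urlopen(u).read().decode())
    try:
        return fetch(url).strip() == token
    except OSError:
        # unreachable domain or missing resource: verification fails closed
        return False
```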
Sure, the same admin console might be able to go in and change the verification mode and rules on a verified account - but that’s something that would happen rarely, and a flurry of activity changing the verification process for a bunch of accounts would definitely merit a red flag.
I guess that's plausible. But on the other hand Twitter got plenty of red flags with this attack but still struggled to stop it. And is our goal detection or prevention?
This is all very true, assuming that was the purpose of the blue check. But it isn't. It's a status symbol that people also see as adding legitimacy to the tweets of an account.
pending verification that the blue check mark still applies to the person now in control of that account
Why would you want to ever allow an admin to transfer control of a verified account to an unverified person?
If you're saying that the account recovery process needs to be at least as secure as the credential that was verified (e.g. email address), then I agree. But I don't think reversion to a "pending" state would ever be desirable, though.
Except that the problem isn't the signature itself, it's the required infrastructure. Grandma doesn't know how to check The Donald's signature. And, of course, the infrastructure is hard (just look at the unfixable PGP persistent-DOS attacks that were discussed a year or two ago).
I once had to restore my Authy 2FAs from a backup, and didn't have access to the original device.
Restoring it took 24 hours, during which I got bombarded with text messages and emails warning me that someone was restoring my backup, and that if it wasn't me, I should immediately click or reply to prevent it from happening.
Seems like that might help - a 24 hour waiting period on any significant account changes for verified accounts.
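A minimal sketch of such a hold, assuming a cancellation link delivered out-of-band as in the Authy story above (the class and method names are mine, for illustration only):

```python
import time

HOLD_SECONDS = 24 * 3600  # the 24-hour cooling-off window

class PendingChanges:
    """Sketch: significant settings changes on verified accounts are queued,
    the owner is warned out-of-band, and the change only takes effect after
    the hold expires - unless the owner cancels it first."""
    def __init__(self):
        self.pending = {}      # change_id -> (apply_after, description)
        self.cancelled = set()

    def request(self, change_id, description, now=None):
        now = now if now is not None else time.time()
        self.pending[change_id] = (now + HOLD_SECONDS, description)
        # here: send the email/SMS warning with a one-click cancel link

    def cancel(self, change_id):
        self.cancelled.add(change_id)

    def applicable(self, change_id, now=None):
        now = now if now is not None else time.time()
        if change_id in self.cancelled or change_id not in self.pending:
            return False
        return now >= self.pending[change_id][0]
```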
This method is actually used in many countries, most of them in Africa, where thanks to M-Pesa and the like the need for protection against SIM swapping is even higher, since your SIM is literally your bank account.
I actually had a similar idea for fighting SIM swaps—we should be able to ask telecoms "hey, when's the last time this phone number was moved to another device/changed IMEI numbers?" and distrust the number if it's been changed less than 48 hours ago.
I've looked but as far as I can tell, such an API does not exist, alas.
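If such a carrier API existed, the consuming side would be trivial - something like the following, where `last_swap_ts` stands in for whatever the hypothetical API would return for the most recent SIM/IMEI change:

```python
import time

DISTRUST_WINDOW = 48 * 3600  # distrust numbers moved in the last 48 hours

def number_is_trustworthy(last_swap_ts, now=None):
    """last_swap_ts: epoch seconds of the most recent SIM swap or IMEI
    change, as a hypothetical carrier API might report; None if never
    swapped. Returns False while the number is inside the distrust window."""
    if last_swap_ts is None:
        return True
    now = now if now is not None else time.time()
    return (now - last_swap_ts) >= DISTRUST_WINDOW
```

A service would then fall back to a non-SMS channel (or simply refuse SMS 2FA) whenever this returns False.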
We (NL) do have such a thing where banks get notified of sim swaps or number transfers. They put the number on hold for 2FA pending authorization of that change with a different (Maybe second) means.
The problem is that with POTS you don't have that kind of capability in the protocol - even Caller ID can't be verified. Most networks will trust whatever is being sent. It's like SMTP: it was designed in an era where security simply wasn't a consideration.
(It's true there are a bunch of important things that PGP has helped solve. Ubiquitous person-to-person secure communication and person-to-service cryptographic authentication are not amongst them. PGP is certainly usefully employed in some niche use cases, but it has failed at pretty much all of its original goals. I can't remember the last time I used it for anything except verifying a software download, and even _that_ use case only applies to a tiny fraction of places that host software downloads. My Arch Linux installs running pacman and silently checking PGP signatures for me may be the only time I've had PGP code run in maybe a decade...)
Something like Require-Recipient-Valid-Since from SMTP? That would be neat. Does SMS have the necessary protocol flexibility to allow that to be added?
The User Data Header of SMS [0] isn't very flexible, and quite constrained - both it and the message needs to fit inside a 140 byte payload.
There are a handful of bytes reserved for a future purpose, which could be used for something like this, but you're limiting how large the message can be, likely significantly.
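The payload arithmetic is easy to get wrong, so here is a small sketch of how the UDH eats into the per-message character budget (standard GSM 03.40 figures: 140 octets per SMS, 160 GSM-7 septets, with the text restarting on a septet boundary after the header):

```python
import math

def gsm7_chars_remaining(udh_bytes: int) -> int:
    """GSM-7 characters left in a single SMS after a User Data Header.
    The payload is 140 octets = 160 septets; the UDH is measured in octets
    and the text portion restarts on the next septet boundary."""
    return 160 - math.ceil(udh_bytes * 8 / 7)

def ucs2_chars_remaining(udh_bytes: int) -> int:
    # UCS-2 messages carry 70 two-byte characters; the UDH consumes octets directly.
    return (140 - udh_bytes) // 2
```

So even the standard 6-byte concatenation header already costs 7 GSM-7 characters per segment; any new "valid-since" style field would shrink the budget further.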
That is part of a service that we use, provided for some banks, but requires a lot of integration with the mobile networks and a lot of additional business logic around new sims, old sims used on new accounts, old sims used on old accounts when first set up, etc.
Banks use it for determining whether it is deemed safe to send OTP or other sensitive messages to a mobile. If sim has been swapped recently, they may then choose not to use text message delivery to prevent potential sim-swap fraud.
There may be a bit of hyperbole in the expression "accounts that could start a war": there are indeed accounts of people who could start a war, yet I fail to imagine how a single tweet, or a few tweets, by some hacker could actually start one. Escalate tensions, sure. But I assume world leaders and their advisors don't rely (solely) on tweets before calling in the cavalry.
Sure, how about we dial the hyperbole down a bit, to "accounts universally known to be a primary mechanism for announcement of international policy by the leader of a country which has started 12 'armed conflicts' in the last 20 years (or 14 if you count doing it twice in Iraq and Libya)"?
I find it quite horrifying that elected officials are legally allowed to use totally unaccountable social media platforms to communicate policy to the public.
>I find it quite horrifying that elected officials are legally allowed to use totally unaccountable social media platforms to communicate policy to the public.
It also creates a number of issues. If Twitter decides to ban me, doesn't that impact my right to contact my representatives via Twitter (especially since a judge has already ruled that a government official blocking a person on Twitter violates their rights)?

It seems the government should only be allowed to use a platform for communication if that platform is treated as a public square that all can access regardless of past history, same as public squares of the past. This isn't putting a limit on Twitter - they are a private company and can do what they want. This is putting a limit on the government.

Now, if Twitter helps the government establish such an account, then they would be a private company choosing to open itself up as a public square, and - specifically with regard to the parts that are a public square - losing some of the rights of a private company (they can't ban you from the public square, but they can ban you from anywhere else, as the rest of the site isn't part of the public square).
Mostly, I find it horrifying that we would elect an official whose grasp of diplomacy is so poor as to continually use completely unfiltered channels, with little grasp of the effect of such communications.
The ultimate check on the behavior of elected officials is supposed to be the voters. You shouldn't have to have laws to prevent them from making bad choices. In this case, the voters like their speech to be "tell it like it is", which mostly means hating the same people they hate. They're getting what they asked for, and are likely to ask for it again.
Yes. It might still be mirrored on Twitter, but at least that would not position it as the authoritative source, and the people at the trigger would have a page to look at beforehand.
Next thing, some will demand that all communications with the government happen over twitter too (or another private entity), thus forcing people to have accounts on them.
There are already organisations that have to control employee access to ‘customer’ data very tightly. Law enforcement. Law enforcement agencies have access to large databases full of people along with a huge amount of very sensitive data (both confidential personal data, and stuff like information about ongoing and typically covert investigations).
I’ve worked with several of these types of organisations and the ones that actually want to manage that access well, typically do a pretty good job (though some of them actually want to do it badly, or just don’t care).
Locking down access to sensitive admin consoles isn’t tremendously difficult, requiring additional approval workflows for highly sensitive operations isn’t particularly difficult, and in that context non-repudiation is rather simple to address.
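An approval workflow of the kind described above is genuinely small to sketch - the core invariant is just that the requester can never approve their own request (names here are illustrative, not from any real system):

```python
class ApprovalWorkflow:
    """Sketch of a second-person gate for sensitive admin actions: the
    action only executes once a different admin has signed off."""
    def __init__(self):
        self.requests = {}  # request_id -> [requester, action, approved_by]

    def request(self, request_id, requester, action):
        # action is a callable performing the sensitive operation
        self.requests[request_id] = [requester, action, None]

    def approve(self, request_id, approver):
        requester, _, _ = self.requests[request_id]
        if approver == requester:
            raise PermissionError("requester cannot approve their own action")
        self.requests[request_id][2] = approver  # non-repudiation: who signed off

    def execute(self, request_id):
        requester, action, approved_by = self.requests[request_id]
        if approved_by is None:
            raise PermissionError("action not yet approved by a second person")
        return action()
```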
The EU doesn’t provide any standards at all relating to information security. It only specifies that security controls must be ‘appropriate’, but no definition or precedent for what that means. Customer service and community moderation staff accessing customer data, or having administrative control over their accounts would certainly not be a violation of EU law.
Parent was probably referring to GDPR, which (IIRC) mandates that employees only have access to the information strictly necessary for their position. You doctor's secretary should only have access to your appointment schedule and phone number, not your medical condition.
Well yes, being able to give priority to at-risk patients would fall under necessary use. I doubt the receptionist has your actual medical history, but they would certainly see an indicator of your risk category.
There is no such thing as “necessary use” and the GDPR does not specify that an organisation must restrict employee access to personal data to only those whose access is “strictly necessary” (the cookie law contains that phrase, but in a completely different context).
The only thing the GDPR says that would apply in this circumstance is this:
> processed in a manner that ensures appropriate security of the personal data
As I stated above, the EU provides exactly 0 guidance on what “appropriate security” is, and no form of standard at all that it expects you to comply with. To make matters more confusing, an organisation is allowed to take their own budget into account, against the cost of security controls, when deciding what is “appropriate”.
You could ask the question, was twitter appropriately secure? You might come to the conclusion that they weren’t, because they were breached and any system that is breached must not be appropriately secure. That wouldn’t be an unreasonable conclusion, and as far as anybody knows that could very well be the standard that any data protection authority may decide to uphold at any time of their choosing. But then that would lead you to consider that there is no such thing as a system that can not be breached, so in that case there would be no such thing as a GDPR compliant service.
GDPR Article 6 spends a lot of time defining ‘necessary’ use. It says that ‘processing data’ - which is defined very broadly and includes accessing it - is only legal if it is for a ‘necessary purpose’ - either necessary to accomplish contracted work for a customer, comply with the law, or some few other permitted categories.
Combined with, as you say, that GDPR also states as a matter of principle data must be “processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing” - I would take that as meaning you can’t just leave data openly accessible to people who don’t need to process it and rely on them not accessing data they don’t need, you are expected to protect the data against that risk. I.e. secure access to data so it can only be processed for necessary purposes.
For small organizations it is possible that ‘telling Janet she isn’t allowed to look in the customer accounts spreadsheet’ is an adequate control, but as organizations get bigger, the expectation that technical controls should be in place obviously expands.
I know that people with access to some telecom systems in the UK have had to be vetted, some even to DV level - i.e. the same as if you were working for a TLA.
So having to pass a TS clearance and the whole SF-86 form for FANG employees is a possibility: "so, Elon, about your pot smoking habits..."
No, according to The Block, @elonmusk repeatedly tweeted the scam at 4:17pm, 5:19pm, and 5:32pm, a span of 90 minutes, and the final scam tweet was at 6:05pm from @KimKardashian.
An hour after @elonmusk's first scam tweet, 7 celebrity or corporate accounts had tweeted the scam, all with the same Bitcoin address. With the two-click system I described, how many compromised admin accounts would you expect the security team to have been able to suspend by then?
8 more celebrity accounts went on to tweet the scam, plus @elonmusk and @kanyewest repeating the scam tweets.
If your database system doesn't have a complete audit log of all fields (most databases have this capability, but more often than not it's disabled), it's possible that the mere act of reverting account ownership might remove data needed for tracing down what happened.
Sure, it's a sucky position to be in, but I can see why they might have been hesitant to dive right in and start trying to undo damage before understanding what had happened.
Replication logs (WAL logs in postgres) contain a complete list of changes to every field. Most big companies keep them as part of a backup strategy. But most wouldn't have the tooling to inspect the logs and see exactly which change was made when during an incident.
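The missing tooling the comment above mentions is mostly just "replay the change log up to a point in time". Assuming changes have already been decoded out of the audit table or replication log into tuples, reconstructing a field's value at any moment is a few lines:

```python
def value_at(changes, field, when):
    """changes: iterable of (timestamp, field, old_value, new_value) tuples,
    e.g. decoded from an audit table or replication log. Returns the value
    the field held at time `when`, or None if nothing was recorded yet."""
    value = None
    for ts, f, old, new in sorted(changes):
        if f != field:
            continue
        if ts > when:
            break  # sorted by time, so nothing later can matter
        value = new
    return value
```

During an incident this is exactly the question responders need answered: "what was the email on this account five minutes before the scam tweet?"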
In an ideal world, world leaders would all have restrained enough Twitter habits such that anything that inflammatory would be seen as an obvious signal that their account was compromised.
It is not about bad habits, it's a systemic problem.
World leaders are also politicians that need to maintain their internal position (even dictators need to at least prevent a coup). So when they spew inflammatory rubbish it's often to appear tough to their internal audience, often at the cost of the interest of their own country.
This ideal world has to figure out how to deal with this conflict of interest.
Yeah, response time might be slower in the middle of the night, but a falsified tweet on a celebrity account in the middle of the night is also likely proportionally less damaging.
Response time during lunch hour might be slower initially, but after responding to the first compromised account I don't think they'd be any slower.
If an admin account is only supposed to be for use by a human employee, it should have a rate-limit tripwire that automatically suspends the account and alerts the security team.
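Such a tripwire is a sliding-window rate limiter with a suspend side effect. A minimal sketch (thresholds and names are mine, chosen for illustration):

```python
from collections import deque

class Tripwire:
    """Sketch: if an admin performs more than `limit` sensitive actions
    within `window` seconds - far beyond human speed - the account is
    auto-suspended and (in a real system) the security team is paged."""
    def __init__(self, limit=5, window=60.0):
        self.limit, self.window = limit, window
        self.events = {}        # admin_id -> deque of action timestamps
        self.suspended = set()

    def act(self, admin_id, now):
        """Returns True if the action is allowed, False if blocked."""
        if admin_id in self.suspended:
            return False
        q = self.events.setdefault(admin_id, deque())
        q.append(now)
        while q and q[0] <= now - self.window:
            q.popleft()         # drop events outside the sliding window
        if len(q) > self.limit:
            self.suspended.add(admin_id)
            # here: alert the security team
            return False
        return True
```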
> ... a falsified tweet on a celebrity account ...
A Hollywood celebrity? A bay area techbro "celebrity"? Or a Bollywood celebrity? Or a British Royal family celebrity? Or a KPop celebrity? Or a Russian oligarch celebrity?
The middle of whose night??? Twitter does exist on the other side of the Bay Bridge, you know...
> Just target the attack in the middle of the night
Define 'middle of the night'. Is that Eastern US, Western US, GMT, or CET?
If I assume it's 2 AM pacific (since Twitter HQ is in SF), that's 11 AM in most of Europe, which makes it middle of the day for about 700 million Europeans, many of whom speak English and are interested in US celebrities. And that's still ignoring the majority of the World.
Anything on the internet is 24/7. Musk regularly tweets in the 'middle of the night' and I see those tweets come in while drinking a cup of coffee.
Also, Twitter has offices around the world and people don't collectively log out for lunch at a specific time, so there should always be people available to handle this type of incident. Provided they get the training and tools to do so.