
[–]meltingintoice 46 points47 points  (1 child)

This was a really good bestof, depthhub. Thanks for posting. It reminds me of reading Gödel, Escher, Bach -- discovering that there is a particular way of dismantling grammar that allows one to solve new problems.

I think the magician analogy is a great one, because half the time when I learn what the magician's trick was, I don't think of the magician as clever so much as a cheater (e.g., he pretends to saw off someone's arm, but they were already an amputee; or he pretends to put a padlock on the chest, but the bolt he inserts the lock into is only Velcro-ed on).

I used to work in election security, and this kind of thing used to keep me up at night. For example, when we got new electronic voting machines, they made a big deal about testing them to make sure they couldn't be accessed remotely -- but there was no way for the officials to review the code that tallied the votes, to make sure it hadn't been tampered with at the factory.

[–]cycle_schumacher 16 points17 points  (0 children)

This was a really good bestof

FWIW this is DepthHub, not bestof. A very common mistake on this sub, though.

[–]Zafara1 65 points66 points  (15 children)

As someone who works in the infosec industry, the explanation here is the best I've ever seen of this hidden mindset that we ourselves use for both attack and defence.

It's also key to understanding the fundamental difference between a developer's perspective and a security perspective. Being able to see past the abstractions allows us to do our jobs effectively, and hackers to do theirs, while being able to rely on abstractions is how developers are able to efficiently produce solutions. This can cause a lot of trouble for both sides: they fail to understand each other because they're thinking on fundamentally different layers of abstraction.

It can be really hard to explain this mindset, so I would absolutely love to use it for future presentations (with /u/gwern's permission, ofc).

[–]AncientSpacecraft 21 points22 points  (1 child)

You would probably enjoy the book Silence on the Wire, if you've never read it. It's one of those rare texts that can transmit an abstract viewpoint, and this is the mindset it teaches you. I've definitely used similar definitions before; where Gwern surprised me is with the observation that the thing security people do is the opposite of what good programmers and mathematicians do. That somehow escaped me, but it makes a lot of sense and does explain why a lot of programmers tend to be so bad at thinking like this.

[–]Zafara1 5 points6 points  (0 children)

Ah! I'd read that Blinkenlights sample chapter years back but didn't buy the book. Will definitely pick it up now, thanks.

And agreed about the mathematicians and programmers too. In computing/maths in general, that's how it has to be to make progress: at a certain point you just have to move away from the low-level concepts and trust that the abstractions are 100% sound. That way you can think about the results of the abstractions rather than the process used to create them, saving substantial amounts of time. A hacker, meanwhile, just has to find a single break in the chain from beginning to end, which means effectively trusting that abstractions can never be 100% sound.

[–]gwern 32 points33 points  (3 children)

I'm pleased that it rings true for you; I've been pondering how one could define a 'security mindset' ever since Bruce Schneier gave his ant-farm example, and I think I've finally gotten it right.

Go right ahead.

[–]redpandaeater 1 point2 points  (2 children)

Though if you've found some way to make a CPU work without running any opcodes, you should publish it, because it's probably Nobel-worthy.

[–]gwern 4 points5 points  (0 children)

Well, people have gotten it down to one opcode, which (supposedly) doesn't do any computation and just copies data (mov), and you definitely have to think hard about what exactly one should call ExSpectre, or how many opcodes it 'executes'...
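
For anyone curious what that looks like, here's a toy C sketch of the mov-style idea -- purely illustrative, not the actual M/o/Vfuscator technique, which works at the x86 instruction level: once lookup tables exist in memory, 'arithmetic' and 'branching' reduce to copying data through them.

```c
/* Toy illustration of "computation by copying": no add instruction,
 * no branch -- results are fetched from tables instead. (The real
 * M/o/Vfuscator does this with x86 mov alone; this C version only
 * mimics the idea, and even cheats by using + to build the table.) */
#include <stdio.h>

int main(void) {
    /* A 4-bit adder as a lookup table: the "add" becomes a copy. */
    static int add[16][16];
    for (int i = 0; i < 16; i++)
        for (int j = 0; j < 16; j++)
            add[i][j] = (i + j) & 15;

    int a = 5, b = 9;
    int sum = add[a][b];            /* data movement, not arithmetic */

    /* Branchless if/else: select a result by indexing, not jumping. */
    int outcome[2] = { 111, 222 };  /* [0] = "else", [1] = "then"    */
    int cond = (a < b);             /* evaluates to 0 or 1           */
    int picked = outcome[cond];     /* data movement, not a branch   */

    printf("sum=%d picked=%d\n", sum, picked);  /* sum=14 picked=222 */
    return 0;
}
```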

[–]hxka 2 points3 points  (0 children)

Well, there's this. (Talk)

[–]Quakespeare 25 points26 points  (5 children)

/u/gwern is a fantastic writer. If interested, you should check out his blog, where he writes on a wide range of topics:

https://gwern.net/

[–]penpractice 16 points17 points  (4 children)

For those who don't know, /u/gwern is the nom de plume of a group of 10-12(?) elite former-Soviet scientists who all post their research over at gwern.net.

[–]thegreenfern 11 points12 points  (2 children)

Are you serious? I was under the impression it was one person. I'm probably being had right now, huh?

[–]PM_ME_UTILONS 10 points11 points  (0 children)

I think he's actually the cover identity of a rogue NSA AI that has escaped and is living wild on the internet.

[–]penpractice 13 points14 points  (0 children)

It's the only way I can make sense of how he's able to write about so much. So I'm pretty sure it's true.

[–]Cyaed 0 points1 point  (0 children)

Citation? I'm willing to believe you, but it's an interesting anecdote and therefore suspect.

[–]psychometrixo 2 points3 points  (0 children)

As a developer whose eyes have recently been opened to this way of thinking, I found it really useful already.

That is to say: it seems like a useful argument for bridging the gap.

[–]dolphone 0 points1 point  (0 children)

I also work in infosec and I feel it's utter rubbish, at least the computer part. A hacker very much sees the program they're attacking in terms of its basic structures (buffers, for example) and leverages those structures and their weaknesses to attack.
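
To make that concrete, a minimal C sketch of the textbook buffer weakness -- every name and size below is invented for illustration:

```c
/* Textbook illustration of a "basic structure" weakness: buf is 16
 * bytes of stack, and strcpy does no bounds check, so any input
 * longer than 15 characters writes past the buffer into adjacent
 * stack memory (saved registers, return address). */
#include <stdio.h>
#include <string.h>

static void greet(const char *input) {
    char buf[16];
    strcpy(buf, input);            /* the weakness: no length check */
    printf("hello, %s\n", buf);
}

int main(int argc, char **argv) {
    if (argc > 1)
        greet(argv[1]);            /* attacker controls the length  */
    return 0;
}
```

An attacker who sees buf as 16 bytes of stack sitting next to a return address, rather than as 'the greeting string', is doing exactly what I'm describing.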

OP is trying to make it sound like a "see the matrix" deal when it really isn't.

[–]blue_strat 23 points24 points  (15 children)

Compared to cutting a hole in the wall, isn't lockpicking closer to the laminated pass analogy? The lock is meant to only open with a key, but will still open if you imitate the key.

People also don't expect anyone to cut a hole in the wall because it raises a ton of alarms that are likely to impede your exit. Not to mention the electrical wires and such that might be in the wall. Bill Mason wouldn't have been quite so sarcastic about the relative security of a wall if he'd been electrocuted.

The Israeli example is really odd since presumably they are trying to capture territory, but that territory is more of a liability when you've blown all the buildings to pieces. This sort of thing is also what creates the next generation of insurgents. It's simply short-sighted idiocy.

Imagine it—you’re sitting in your living room, which you know so well; this is the room where the family watches television together after the evening meal. . . . And, suddenly, that wall disappears with a deafening roar, the room fills with dust and debris, and through the wall pours one soldier after the other, screaming orders. You have no idea if they’re after you, if they’ve come to take over your home, or if your house just lies on their route to somewhere else. The children are screaming, panicking. . . . Is it possible to even begin to imagine the horror experienced by a five-year-old child as four, six, eight, twelve soldiers, their faces painted black, submachine guns pointed everywhere, antennas protruding from their backpacks, making them look like giant alien bugs, blast their way through that wall?

Edit: Now let's watch theory push out application, because there is no spoon...

[–]Zafara1 17 points18 points  (4 children)

So these are interesting questions to ask, and a very quick detour at the end there. Are you talking about the Israeli example from the Nakatomi space article?

In regards to your other points, since I work in infosec, hopefully I can give you an industry-relevant perspective:

Compared to cutting a hole in the wall, isn't lockpicking closer to the laminated pass analogy? The lock is meant to only open with a key, but will still open if you imitate the key

Yes, it is. But the key (har har) point that you're missing there is that you're still narrowing your possible scope by working at a certain layer of abstraction, and not abstracting the system and your objective away to what they really are.

The point of the analogy isn't just that you're able to see that part of the chain can be imitated to break the security; it's the ability to recognise that the chain doesn't even need to be broken, but can be bypassed completely by making a hole in the wall/ceiling. It's being able to step back from the locking system entirely and realise that your objective isn't to break the lock -- your objective is to reach the area behind the lock. Meeting that objective could be done by picking the lock, or by imitating blue plastic, or by removing the lock from the equation entirely, either by physically removing the door/lock itself or the other barriers that exist (walls).

There's an often-used metaphor in security: you've bought a $10,000 lock to secure a room with cardboard walls. Off the top of my head, the first examples of this I can think of are some tamper-evident cardboard boxes. More than a few companies sell cardboard boxes with tamper-evident seals on the lids that are supposed to leave behind visible residue if the seal is broken or tampered with; that way, an inspector can tell if someone has potentially altered the material inside. Except they're otherwise normal cardboard boxes and can simply be unfolded from the bottom, bypassing the seal entirely... fancy lock, cardboard walls.

People also don't expect anyone to cut a hole in the wall because it raises a ton of alarms that are likely to impede your exit. Not to mention the electrical wires and such that might be in the wall.

Which is also why it works: because it is unexpected. The things you have listed are all about knowing that the possibility exists in the first place -- that cutting through a wall is even an issue. This is why it's so important to have attackers regularly assess your security systems, and to have those systems designed by people with a hacker's mindset. Once those issues are identified, you then have to perform a cost/risk assessment to determine what, and how much, needs to be put in place to lower the risk to an acceptable level. No lock is impenetrable, and at what point do men with guns and explosives just come in and blow the door out completely? For your bike shed, not very likely. For a uranium storage facility? An entirely possible reality that has to be accounted for. But a $100,000 lock to protect a $10 bike is as silly as $10 walls protecting a $100,000 bike.
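
A minimal sketch of that cost/risk arithmetic, using the standard single-loss-expectancy / annualized-rate-of-occurrence formulas -- every number below is made up for illustration:

```c
/* Back-of-envelope cost/risk assessment, as described above, using
 * the standard formulas (all numbers invented for illustration):
 *   ALE = ARO * SLE   (annualized loss = occurrences/yr * loss/event)
 *   mitigate when the ALE reduction exceeds the mitigation's cost.  */
#include <stdio.h>

int main(void) {
    double sle        = 100000.0;  /* loss per event: the $100k bike */
    double aro_before = 0.05;      /* est. wall break-ins per year   */
    double aro_after  = 0.005;     /* after reinforcing the wall     */
    double mitigation = 1000.0;    /* annualized reinforcement cost  */

    double benefit = (aro_before - aro_after) * sle;  /* risk removed */

    printf("risk removed: $%.0f/yr, cost: $%.0f/yr -> %s\n",
           benefit, mitigation,
           benefit > mitigation ? "reinforce the wall"
                                : "accept the risk");
    return 0;
}
```

Which is how you end up at conclusions like the $100,000-lock-on-a-$10-bike one: the spend has to track the loss.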

Even right now, you've neglected to mention the ceiling approach. Because you're making the decisions on this security system, your room is now easily accessible via the ceiling, which has plenty of room for electricians to navigate through -- and so plenty of room for adversaries. It all becomes part of the risk an attacker has to weigh in order to achieve their objective: is it more important to get in and out of this room as quickly as possible, without caring about post-compromise evidence, because the lock would slow them down too much? Or is it necessary to complete the operation without leaving a trace?

How does anybody cut through a wall without getting electrocuted? Do you switch the power off first? And we only have one shot if we're caught. So really, what you're saying is: rather than pick that complex unknown lock that could take hours, we could have a person inside the facility ram the generators, punch a hole in the wall, and get what we need and get out while the panic is still in effect. The risk is that 1-2 operators could be killed, against a delay before the guards realise it's an attack and that something has been stolen -- or we risk losing 10-20 operators in a military-style assault to capture and retrieve.

[–]gwern 12 points13 points  (9 children)

I'd say your lockpicking example is still missing the point. You phrase it as 'imitating the key' (emphasis added), but you've already fallen for the 'key' semantic illusion: that there is a specific key you must copy in order to operate the system as intended. No, you aren't 'imitating the key'; you are attempting to put the atoms which are the cylinders into a particular physical arrangement by any means possible.

Hence, bump keys - which do not 'imitate the key' by trying to reverse-engineer the cut levels or adjust each cylinder to where it should be, but simply brute-force bounce the whole system into the desired state.

[–]blue_strat 3 points4 points  (8 children)

If you don't want to subscribe to the "key" paradigm, why use a bump key when drilling the lock would have the same outcome? Oh, because you might want to leave the lock intact for various reasons.

Defeating a system doesn't have to mean ignoring its precepts entirely: there is overlap between WX and YZ. The examples in the two links (rather than your associated comment) show a determination to ignore the system as commonly perceived and take a surprising path just for the sake of it, regardless of how effective.

[–]gwern 9 points10 points  (7 children)

I don't see how that's an objection. Desiring to leave the lock intact is not inconsistent with the hacker mindset; it is merely another constraint. And you are still falling for semantic illusions: why think that it is 'the lock'? You could drill the lock, and then replace it with another when you leave! Hacking object permanence like that - 'surely it's the same lock as when I walked by the room last time, QED, the room is secure' - is the stage magician trick par excellence.

show a determination to ignore the system as commonly perceived and take a surprising path just for the sake of it, regardless of how effective.

Describing a successful burglary and a military operation as 'taking a surprising path just for the sake of it' seems unjustified. How do you know those were not the best routes under the circumstances? You weren't there, and you're neither a professional burglar nor an Israeli special-ops soldier.

[–]blue_strat 1 point2 points  (6 children)

Your argument, then, is that no matter how stupid a course of action appears from what we know about it, because it happened to work (see: survivorship bias), the mindset that led to it must have been the perfect kind that accounted for all possibilities, rather than an imperfect kind that could have been achieved by a drunk teenager?

[–]gwern 10 points11 points  (3 children)

I'm going to say that when a tactic worked in practice for successful professionals and is similar to highly successful tactics in the same industries and elsewhere, it's going to take more than some vague freefloating sneering on the Internet to convince me that that tactic was actually a terrible idea.

[–]blue_strat 1 point2 points  (2 children)

Successful professionals? Bill Mason claimed to be a thief when he was selling a book about it. He claimed in a CNN interview to have stolen millions from people he specified. Some skepticism is healthy.

The Israeli military op wasn't successful, as far as we know, beyond managing to detonate their explosives. Plenty of military campaigns have achieved that much and still been seen as failures.

Some vague freefloating approval on the Internet is about as convincing as any opposite sneering, but has the disadvantage of blithely encouraging behaviour in the hope that it was well thought out.

[–]gwern 6 points7 points  (1 child)

Neither of your objections actually establishes any reason to disbelieve them; they remain freefloating sneering and goalpost-moving.

[–]blue_strat 5 points6 points  (0 children)

That's a lot of faith to put behind something you didn't even use as examples.

[–]Zafara1 0 points1 point  (1 child)

Your argument, then, is that no matter how stupid a course of action appears from what we know about it, because it happened to work

Yes, actually. Because if it works, it works. Why does a thief care about being seen as "stupid" if the route works? If it achieves the goals, there is no need to go further.

And "Stupidity" is relative. It's absolutely more stupid to sit there picking a lock and getting caught in the act then it is to kick a wall down and get out before somebody notices. In one you're a free man, in the other you're serving 20 years in a federal prison.

perfect kind that accounted for all possibilities

Yes, this is how security works. It has to keep adapting, and it has to account for as many possibilities as it can. Once you've accounted for these methods, you weigh the cost and effort it would take to perform each method, what the cost is if whatever you're protecting is stolen, how you can mitigate or offset the risk, and how much that costs. Then you can decide where to put your efforts. Somebody punching a hole through a wall may be unlikely, but reinforcing a wall is a cheap solution, so why not secure it properly? Especially if you could very well be defending information/assets that in the wrong hands could affect millions of people.

[–]blue_strat 2 points3 points  (0 children)

Where has security been the frame here? The thread is about the hacker's mindset and point of view, and the likelihood of an attack working.

There is a huge disconnect here between support for an analytical approach and the examples purported to embody such an approach.

[–]eof 5 points6 points  (0 children)

Gwern is an incredibly prolific writer, developer, and biohacker. If you enjoyed this, I highly recommend his website.