Just Disconnect the Internet
The article argues against the belief that computer systems should be completely disconnected from the internet for security, highlighting the impracticality and complexities of modern business interconnectivity and maintenance needs.
The article discusses the common belief that computer systems should not be connected to the internet for security and reliability reasons, particularly in the context of a security incident involving a vendor. It argues that while the idea is appealing, it is often impractical in modern business environments where interconnectivity is essential for operations. Most business systems rely on communication with other systems, making complete disconnection unrealistic. The author highlights the complexities of defining what "not connected to the internet" means, as there are various degrees of connectivity that can still pose risks.
The piece emphasizes that many systems require network access for maintenance, updates, and adaptability to changing business needs. It also points out the challenges faced by organizations operating in restricted environments, where the lack of internet connectivity can significantly increase costs and complicate software management. The author shares personal experiences illustrating the difficulties of managing software and updates in offline settings, emphasizing that while individual tasks may not be inherently difficult, the cumulative effect can be overwhelming. Ultimately, the article calls for a more nuanced understanding of internet connectivity in relation to security, rather than blanket statements advocating for disconnection.
Related
The IT Industry is a disaster (2018)
The IT industry faces challenges in IoT and software reliability. Concerns include device trustworthiness, complex systems, and security flaws. Criticisms target coding practices, standards organizations, and propose accountability and skill recognition.
Six Dumbest Ideas in Computer Security
In computer security, common misconceptions like "Default Permit," "Enumerating Badness," and "Penetrate and Patch" hinder effective protection. Emphasizing a "Default Deny" policy and proactive security design is crucial.
Never Update Anything
The article critiques frequent software updates, citing time constraints, confusion over update types, and companies pushing paid upgrades. It highlights challenges and issues caused by updates, questioning their necessity.
Dan Geer on CrowdStrike: It Is Time to Act
The article highlights cybersecurity challenges amid global outages, emphasizing the need for integrated security policies, redundancy in systems, and proactive measures to prevent silent failures and vulnerabilities in technology.
Technology's grip on modern life is pushing us down a dimly lit path
A global technology outage caused by a CrowdStrike software update exposed vulnerabilities in interconnected systems, prompting calls for a balance between innovation and security to enhance digital resilience.
- Many commenters emphasize the importance of strict security measures, advocating for whitelisting and controlled access to minimize risks.
- Some argue for the benefits of air-gapped networks, citing examples like Sweden's Sjunet, which enhance security without full disconnection.
- Concerns are raised about the complexity and poor design of software, suggesting that better coding practices could reduce vulnerabilities.
- Several commenters highlight the need for a balanced approach, suggesting limited connectivity or scheduled access to mitigate risks while maintaining functionality.
- There is a consensus that simply disconnecting from the internet is not a viable long-term solution, as it can hinder operational efficiency.
Unfortunately the vast majority of people do their jobs poorly. The entire industry is set up to support people doing their jobs poorly and to make doing your job well hard.
If I deploy digital signage, the only network access it should have is whitelisted to my servers' IP addresses, and it should only accept updates that are signed, over connections established with certificate pinning.
This makes it nearly impossible for a remote attacker to mess with it. Look at the security industry that has exploded from the rise of IoT. There's signage out there (replace with any other IoT/SCADA/deployed device) with open ports and default passwords, I guarantee it.
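To make the pinning idea concrete, here is a minimal sketch in Python of what the device-side check can look like; the host name and fingerprint below are hypothetical placeholders, and ordinary chain validation still runs first:

```python
# Minimal certificate-pinning sketch: refuse any TLS connection whose
# server certificate does not match a fingerprint shipped with the device.
# UPDATE_HOST and PINNED_SHA256 are hypothetical values for illustration.
import hashlib
import socket
import ssl

UPDATE_HOST = "updates.example.com"   # hypothetical update server
PINNED_SHA256 = "ab12...ef90"         # SHA-256 of the expected leaf cert (DER)

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()            # normal chain validation first
    sock = socket.create_connection((host, port), timeout=10)
    ssock = ctx.wrap_socket(sock, server_hostname=host)
    der = ssock.getpeercert(binary_form=True)     # leaf certificate, DER-encoded
    fingerprint = hashlib.sha256(der).hexdigest()
    if fingerprint != PINNED_SHA256:
        ssock.close()
        raise ssl.SSLError(f"certificate fingerprint mismatch: {fingerprint}")
    return ssock                                   # only now fetch signed updates
```

Pinning like this complements, rather than replaces, verifying the signature on the update payload itself.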
IoT is just a computer, but it's also a computer that you neglect even more than the servers/virtual machines you're already running poorly.
People don't want to accept this, or even might be affronted by this.
There are some places doing things well - but it's a vast minority of companies out there, because you are not incentivised to do things well.
"Best practises" or following instructions from vendors does not mean you are doing things well. It means you are doing just enough that a vendor can be bothered to support. Which in a lot of cases is unfettered network access.
I guess Sjunet can be seen as an industry-wide air-gapped environment. I'd say it improves security, but at a smaller cost than each organization having its own air-gapped network with a huge allowlist.
Every tool and die shop in your neighborhood industrial park contains CNC machines with Ethernet ports that cannot be put on the Internet. Every manufacturing plant with custom equipment, conveyor lines and presses and robots and CNCs and pump stations and on and on, uses PLC and HMI systems that speak Ethernet but are not suitable for exposure to the Internet.
The article says:
> In other words, the modern business computer is almost primarily a communications device.
> There are not that many practical line-of-business computer systems that produce value without interconnection with other line-of-business computer systems.
which ignores the entirety of the manufacturing sector as well as the electronic devices produced by that sector. Millions of embedded systems and PLCs produce value all day long by checking once every millisecond whether one or more physical or logical digital inputs have changed state, and if so, changing the state of one or more physical or logical digital outputs.
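As a toy illustration of that input-scan/output-update loop (not any particular PLC's runtime; the I/O functions are hypothetical stand-ins for real digital I/O):

```python
# Toy model of a PLC scan cycle: read inputs, compare against the last scan,
# update outputs on a state change, repeat roughly once per millisecond.
import time

def read_inputs() -> dict:
    # hypothetical: poll physical/logical digital inputs
    return {"part_present": True, "clamp_closed": False}

def write_outputs(outputs: dict) -> None:
    # hypothetical: drive physical/logical digital outputs
    pass

def scan_cycle() -> None:
    previous = read_inputs()
    while True:
        current = read_inputs()
        if current != previous:                  # one or more inputs changed state
            write_outputs({"clamp": current["part_present"]})
            previous = current
        time.sleep(0.001)                        # ~1 ms scan time
```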
There's no need for the resistance welder, whose castings were built more than a century ago and whose last update was a PLC and a black-and-white screen for recipe configuration in 2003, to be updated with 2024 security systems. You just take your clipboard to it, punch in the targets, and precisely melt some steel.
Typically, you only connect to machines like this by literally picking up your laptop and walking out to the machine with an Ethernet patch cable. If anything beyond that, I expect my customers to put them on a firewalled OT network, or bridge between information technology (IT) and operations technology (OT) with a Tosibox, Ixon, or other SCADA/VPN appliance.
One food court had kiosks with Windows and complete access to the Internet. Somebody could download malware and steal credit card data. Every time I used one, I turned it off or left a message on the screen. Eventually they started running it in kiosk mode.
Another was a parking kiosk. It was never hardened. I guess criminals haven't caught on to this yet.
The third was an interactive display for a brand of beer. This one wasn't going to cause any harm, but I liked to leave Notepad open with "Drink water" on it. Eventually they turned it off. That's one way to fix it, I guess.
Oof, I feel this one. I tried to get IntelliJ's JRE trust store to understand that there was a new Zscaler certificate it had to use. There were two or three different JDKs to choose from; all of their trust stores were given the new certificate, it still didn't work, and we didn't know why.
It has interesting limitations due to the amateur radio spectrum used, including a total ban on commercial use.
That is the social contract of the spectrum: you get cheap access to loads of spectrum between 136 kHz and 241 GHz, but you can't make money with it.
Beyond that there are plenty of even more ridiculous examples of things that are now connected to the internet, like refrigerators, kettles, garage doors etc. (I don't know if many, or any, of these things were affected by the CrowdStrike incident, but if not, it's only a matter of time until the next one.)
As for the claim that non-connected systems are "very, very annoying", my experience as a user is that all security is "very, very annoying". 2FA, mandatory password changing, locked down devices, malware scanners, link sanitisers - some of it is necessary, some of it is bullshit (and I'm not qualified to tell the difference), but all of it is friction.
1. These systems shouldn't allow outbound network flows. That will stop all auto-updates, which you can then manage via internal distribution channels.
2. Even without that, you can disable auto-updates on many enterprise software products - Windows notably, but also CrowdStrike itself. I heard of CS customers who disabled auto-update and did manual rollouts, and were saved by this practice.
3. Tacking on to number 2 - gradual rollout of updates which you've done some smoke testing on, just in case (a minimal sketch of the cohorting logic follows this list). Again, I heard of CS customers who did a gradual rollout and managed to have only a fraction of their machines impacted.
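A minimal sketch of how such a gradual-rollout gate can be expressed, assuming a stable machine identifier; all names here are illustrative, not any vendor's actual mechanism:

```python
# Sketch of a gradual-rollout gate: hash a stable machine ID into [0, 100)
# and only apply an update when the machine's bucket falls under the current
# rollout percentage. Widening the percentage widens the cohort.
import hashlib

def rollout_bucket(machine_id: str) -> int:
    digest = hashlib.sha256(machine_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % 100

def should_update(machine_id: str, rollout_percent: int) -> bool:
    return rollout_bucket(machine_id) < rollout_percent

# Day 1: smoke-test on ~5% of the fleet; widen only if nothing breaks.
print(should_update("host-0042", 5))
```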
It may not be the software in question, but proprietary snowflake entitlement-management software, full of black boxes and proprietary voodoo, with no disaster recovery capacity, software that would have been considered tech debt a decade ago... disgracefully came into life in the year 2021. It did not gracefully recover from clownstrike, to say the least.
Or, in the case of CrowdStrike: I can imagine support starts to get some calls, and at some point you realize that something has gone horribly wrong. An update, maybe not obvious which, is wreaking havoc. How do you stop it? Have you foreseen this scenario, and do you have a simple switch to stop sending updates?
Or do you cut the internet? Unlike in the movies, there isn't a single cord to pull; maybe the servers are in a different building, or in some cloud somewhere. They probably have a CDN - can you pull the files from it efficiently?
Now maybe by the time they discovered this it was mostly too late; all online systems might already have gotten the latest update (but even if that is the case, do they know that is the case?).
And I don't think that is enough. I agree that it is easier, and sufficient, for most systems to just be connected over the internet. But health, aviation, and critical infrastructure in general should try to be offline as much as possible. Many of the issues described with being offline stem from having many third-party dependencies (which typically assume internet access). In general, but for critical infrastructure especially, you want as few third-party dependencies as possible. Sure, it's not as easy as saying "we don't want third-party dependencies" and all is well; you'll have to make a lot of sacrifices. But you also have a lot to gain by dramatically decreasing complexity, and not only from a security standpoint. I really do believe there are many cases where it would be better to use a severely limited tech stack (hardware and software) and use a data-diode-like approach where necessary.
One of the key headaches mentioned when going offline is TLS. I agree, and I think the solution is to not use TLS at all; using a VPN inside the air-gapped network should be slightly better. It's still a huge headache, and you have to get it right, but being connected to the internet at all times is also a HUGE headache.
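On the data-diode-like approach mentioned above: a real data diode enforces one-way flow in hardware, but a rough software approximation is a send-only stream with no return channel. A minimal sketch, with a hypothetical receiver address:

```python
# Software approximation of a data-diode flow: the sender pushes UDP datagrams
# one way and never reads; there is no ACK and no return path. A real data
# diode enforces this physically; the host/port here are hypothetical.
import socket

DIODE_RX = ("10.0.0.2", 5005)   # hypothetical receiver inside the protected network

def send_reading(payload: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, DIODE_RX)   # fire-and-forget
    finally:
        sock.close()

send_reading(b"pump_station_7 pressure=4.2bar")
```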
Does a computer that can access your accounting system need to access the internet? Or email?
A user could run two computers, one that’s for internet stuff, and one that does important internal stuff. But that’s a silly idea because it’s costly.
However, we can achieve the same thing with virtualization, where a user’s web browser is running in a container/VM somewhere and if compromised, goes away.
Stuff like this exists throughout society in general. When should a city employee carry a gun? On one end of the spectrum, the SWAT team probably needs guns. On the other end, the guy who put a note on my door that my fence was leaning into the neighbor’s property didn’t have a gun. So the question is, is a traffic stop closer to the SWAT team or to the guy kindly letting me know I’ve violated a city ordinance?
I don’t know why these things don’t get reassessed. Is it that infrastructure is slower to iterate on? Reworking the company’s network infrastructure, or retraining law enforcement departments, is a big, costly undertaking.
I did find it surprising however that so many systems shown on TV run Windows.
Digital signage screens, shopping registers, all sorts of stuff that I assumed would be running Linux.
It is surprising to me that systems with functions like a cash register would be doing automatic updates at all.
The thing that drives me nuts is not even that, which is bad enough, but the assumption that the Internet connection is always stable and that it is legitimate to say "wait until some connections are up again", as though there are no such things as power outages, network-level errors, cable tears, physical socket failures and such.
Are these people not writing blogs to be read?
And just to be ahead of it, just because you are able to read it doesn't mean it wouldn't be easier and more comfortable to read in a more suitable font.
Whitelist all IPs needed for business functionality, and enable the whole Internet once every 3 hours for an hour.
Bonus points if you can do it by network segment.
It would be enough to spare half your computers from the CrowdStrike issue, since I believe the update was pulled after an hour.
Will anyone do this? Probably not. But it is worth entertaining as a possibility between fully-on connectivity and full disconnection.
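A sketch of the window logic, reading "once every 3 hours for an hour" as a four-hour cycle with one open hour (one possible interpretation); actual enforcement would live in the firewall, per network segment:

```python
# Compute whether the general-egress window is currently open. Treats each
# 4-hour cycle as 3 hours whitelist-only followed by 1 open hour.
from datetime import datetime
from typing import Optional

def internet_window_open(now: Optional[datetime] = None) -> bool:
    now = now or datetime.now()
    return now.hour % 4 == 3      # hours 3, 7, 11, ... are the open hours

if internet_window_open():
    print("general egress allowed; whitelist-only otherwise")
```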
That's non-ironically the problem. Current software culture creates "secure software" with a 200-million-line attack surface and then acts surprised when it blows up spectacularly. We do this because there is effectively no liability for software vendors or for their customers. What software security vendors sell is regulatory compliance, not security.
[...] With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. [...] Devices in the network use Bluetooth to scan for nearby items.
Full email content:
Find My Device network is coming soon
You can use Find My Device today to locate devices when they’re connected to the internet. With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. You can also find any compatible Fast Pair accessories when they’re disconnected from your device. This includes compatible earbuds and headphones, and trackers that you can attach to your wallet, keys, or bike.
To help you find your items when they’re offline, Find My Device will use the network of over a billion devices in the Android community and store your devices’ recent locations.
How it works
Devices in the network use Bluetooth to scan for nearby items. If other devices detect your items, they’ll securely send the locations where the items were detected to Find My Device. Your Android devices will do the same to help others find their offline items when detected nearby.
Your devices’ locations will be encrypted using the PIN, pattern, or password for your Android devices. They can only be seen by you and those you share your devices with in Find My Device. They will not be visible to Google or used for other purposes.
You’ll get a confirmation email in 3 days when this feature is turned on for your Android devices. Until then, you can opt out of the network through Find My Device on the web. Your choice will apply to all Android devices linked to [email]. After the feature is on, you can manage device participation anytime through Find My Device settings on the device.
Learn more
A long time ago I worked at a broker trader where all communications, including servers communications, had to go through zscaler as a man in the middle.
What had been routine all of a sudden became impossible.
Turns out that git, apt, pip, cabal and ctan all had different ideas about how easy they should make this. After a month of fighting each of them I gave up. I just downloaded everything from their public FTP repos and built from source over a week. I wish good luck to whoever had to maintain it.
I'm looking at you, Node.js and Firefox.
Node at least supports an environment variable (NODE_EXTRA_CA_CERTS) for adding certificates to the list of trusted certs, but it's not as simple as an option to use the system store.
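For clients that do expose a knob, the fix is usually just pointing them at the corporate root CA. An illustration using Python's requests, which accepts a bundle path via verify=; the bundle path below is hypothetical:

```python
# Behind a TLS-intercepting proxy (e.g. Zscaler), each client must trust the
# corporate root CA that re-signs outbound traffic. The path is hypothetical.
import requests

CORPORATE_CA_BUNDLE = "/etc/ssl/corp/zscaler-root.pem"  # hypothetical path

resp = requests.get("https://pypi.org/simple/", verify=CORPORATE_CA_BUNDLE)
print(resp.status_code)
```

The pain is that every toolchain (git, pip, the JVM, Node, and so on) keeps its own trust store with its own knob, which is exactly the month of fighting described above.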
"Many computer scientists believe that people who talk about computer autonomy are indulging in a lot of cybernetic hoopla. Most of these skeptics are engineers who work mainly with technical problems in computer hardware and who are preoccupied with the mechanical operations of these machines. Other computer experts seriously doubt that the finer psychic processes of the human mind will ever be brought within the scope of circuitry, but they see autonomy as a prospect and are persuaded that the social impact will be immense.
Up to a point, says Minsky, the impact will be positive. “The machine dehumanized man, but it could rehumanize him.” By automating all routine work and even tedious low-grade thinking, computers could free billions of people to spend most of their time doing pretty much as they d—n please. But such progress could also produce quite different results. “It might happen”, says Herbert Simon, “that the Puritan work ethic would crumble to dust and masses of people would succumb to the diseases of leisure.” An even greater danger may be in man’s increasing and by now irreversible dependency upon the computer.
The electronic circuit has already replaced the dynamo at the center of technological civilization. Many US industries and businesses, the telephone and power grids, the airlines and the mail service, the systems for distributing food and, not least, the big government bureaucracies would be instantly disrupted and threatened with complete breakdown if the computers they depend on were disconnected. The disorder in Western Europe and the Soviet Union would be almost as severe. What’s more, our dependency on computers seems certain to increase at a rapid rate. Doctors are already beginning to rely on computer diagnosis and computer-administered postoperative care. Artificial Intelligence experts believe that fiscal planners in both industry and government, caught up in deepening economic complexities, will gradually delegate to computers nearly complete control of the national (and even the global) economy. In the interests of efficiency, cost-cutting and speed of reaction, the Department of Defense may well be forced more and more to surrender human direction of military policies to machines that plan strategy and tactics. In time, say the scientists, diplomats will abdicate judgment to computers that predict, say, Russian policy by analyzing their own simulations of the entire Soviet state and of the personalities—or the computers—in power there. Man, in short, is coming to depend on thinking machines to make decisions that involve his vital interests and even his survival as a species. What guarantee do we have that in making these decisions the machines will always consider our best interests? There is no guarantee unless we provide it, says Minsky, and it will not be easy to provide—after all, man has not been able to guarantee that his own decisions are made in his own best interests. Any supercomputer could be programmed to test important decisions for their value to human beings, but such a computer, being autonomous, could also presumably write a program that countermanded these “ethical” instructions. There need be no question of computer malice here, merely a matter of computer creativity overcoming external restraints."
an open source example: https://blog.openziti.io/no-listening-ports
This was a resource management problem, a process problem.
Meaning: if your processes are invalid, you can also fail in an offline scenario. If you do not treat quality control or testing correctly, you are going to have a bad time.
It's basically an admission that the software may be full of vulnerabilities and the only way to protect it is to limit its exposure to the outside world.
The root of the problem is that almost all software is poorly designed and full of unnecessary complexity which leaves room for exploitation. Companies don't have a good model for quality software and don't aim for it as a goal. They just pile on layer upon layer of complexity.
Quality software tends to be minimalistic. The code should be so easy to read that an average hacker could hack it in under an hour if there was an issue with it... But if the code is both simple and there is no vulnerability within it, then you can rest assured that there exist no hackers on the face of the earth who can exploit it in unexpected ways.
The attack surface should be crystal clear.
You don't want to play a game of cat and mouse with hackers because it's only a matter of time before you come across a hacker who can surpass your expectations. Also, it's orders of magnitude more work to create complex secure software than it is to create simple secure software.
The mindset to adopt is that bad code deserves to be hacked. Difficulty involved in pulling off the hack is not a factor. It's a matter of time before hackers can disentangle the complexity.