August 1st, 2024

Just Disconnect the Internet

The article argues against the belief that computer systems should be completely disconnected from the internet for security, highlighting how impractical that is given modern business interconnectivity and maintenance needs.

Sentiment: disagreement, skepticism, frustration

The article discusses the common belief that computer systems should not be connected to the internet for security and reliability reasons, particularly in the context of a security incident involving a vendor. It argues that while the idea is appealing, it is often impractical in modern business environments where interconnectivity is essential for operations. Most business systems rely on communication with other systems, making complete disconnection unrealistic. The author highlights the complexities of defining what "not connected to the internet" means, as there are various degrees of connectivity that can still pose risks.

The piece emphasizes that many systems require network access for maintenance, updates, and adaptability to changing business needs. It also points out the challenges faced by organizations operating in restricted environments, where the lack of internet connectivity can significantly increase costs and complicate software management. The author shares personal experiences illustrating the difficulties of managing software and updates in offline settings, noting that while individual tasks may not be inherently difficult, the cumulative effect can be overwhelming. Ultimately, the article calls for a more nuanced understanding of internet connectivity in relation to security, rather than blanket statements advocating for disconnection.

AI: What people are saying
The comments reflect a diverse range of opinions on the article's stance regarding internet connectivity for security.
  • Many commenters emphasize the importance of strict security measures, advocating for whitelisting and controlled access to minimize risks.
  • Some argue for the benefits of air-gapped networks, citing examples like Sweden's Sjunet, which enhance security without full disconnection.
  • Concerns are raised about the complexity and poor design of software, suggesting that better coding practices could reduce vulnerabilities.
  • Several commenters highlight the need for a balanced approach, suggesting limited connectivity or scheduled access to mitigate risks while maintaining functionality.
  • There is a consensus that simply disconnecting from the internet is not a viable long-term solution, as it can hinder operational efficiency.
29 comments
By @flumpcakes - 3 months
I work in security/systems/ops/etc. and fundamentally disagree with this premise. I understand the author is saying "it's not that easy", and I agree completely with that, but I don't agree that the difficulty means you're doing your job well.

Unfortunately the vast majority of people do their jobs poorly. The entire industry is set up to support people doing their job poorly and to make doing your job well hard.

If I deploy digital signage the only network access it should have is whitelisted to my servers' IP addresses and it should only accept updates that are signed and connections that have been established with certificate pinning.
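
A minimal sketch of the pinning side of this in Python, assuming the device already knows the SHA-256 fingerprint of the server's certificate (host, port, and digest below are placeholders, not from the comment):

    import hashlib
    import socket
    import ssl

    PINNED_SHA256 = "0f1e2d..."  # hex SHA-256 of the expected server cert (placeholder)

    def open_pinned_connection(host: str, port: int = 443) -> ssl.SSLSocket:
        context = ssl.create_default_context()  # still verify the chain and hostname
        sock = context.wrap_socket(socket.create_connection((host, port)),
                                   server_hostname=host)
        der_cert = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der_cert).hexdigest() != PINNED_SHA256:
            sock.close()
            raise ssl.SSLError("server cert does not match the pinned fingerprint")
        return sock

Anything fetched over that socket would additionally be a signed update, verified before install, per the comment above.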

This makes it nearly impossible for a remote attacker to mess with it. Look at the security industry that has exploded from the rise of IoT. There's signage out there (replace with any other IoT/SCADA/deployed device) with open ports and default passwords, I guarantee it.

IoT is just a computer, but it's also a computer that you neglect even more than the servers/virtual machines you're already running poorly.

People don't want to accept this, or even might be affronted by this.

There are some places doing things well - but it's a vast minority of companies out there, because you are not incentivised to do things well.

"Best practises" or following instructions from vendors does not mean you are doing things well. It means you are doing just enough that a vendor can be bothered to support. Which in a lot of cases is unfettered network access.

By @anticristi - 3 months
In Sweden, there is a private network (Sjunet) which is isolated from the Internet. It is used by healthcare providers. Its purpose is to make computers valuable communication devices (I love how the article points this out), but without exposing your hospital IT to the whole Internet. Members of Sjunet are expected to know their networks and keep tight controls on IT.

I guess Sjunet can be seen as an industry-wide air-gapped environment. I'd say it improves security, but at a smaller cost than each organization having its own air-gapped network with a huge allowlist.

By @LeifCarrotson - 3 months
I'm a controls engineer. I've built hundreds of machines; they do have Ethernet cables for fieldbus networks, but they should never be connected to the Internet.

Every tool and die shop in your neighborhood industrial park contains CNC machines with Ethernet ports that cannot be put on the Internet. Every manufacturing plant with custom equipment - conveyor lines and presses and robots and CNCs and pump stations and on and on - uses PLC and HMI systems that speak Ethernet but are not suitable for exposure to the Internet.

The article says:

> In other words, the modern business computer is almost primarily a communications device.

> There are not that many practical line-of-business computer systems that produce value without interconnection with other line-of-business computer systems.

which ignores the entirety of the manufacturing sector as well as the electronic devices produced by that sector. Millions of embedded systems and PLCs produce value all day long by checking once every millisecond whether one or more physical or logical digital inputs have changed state, and if so, changing the state of one or more physical or logical digital outputs.
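
For readers outside the sector, a toy version of that scan loop in Python (read_inputs and write_outputs stand in for real I/O and are assumptions, not any particular PLC's API):

    import time

    def compute(inputs):
        # ladder-logic stand-in: e.g. run the pump only while the tank reads low
        return {"pump": inputs.get("tank_low", False)}

    def scan_loop(read_inputs, write_outputs, scan_time=0.001):
        last = None
        while True:                    # runs forever, no network required
            inputs = read_inputs()     # snapshot the physical/logical inputs
            if inputs != last:         # a state change was detected
                write_outputs(compute(inputs))
                last = inputs
            time.sleep(scan_time)      # ~1 ms scan cycle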

There's no need for the resistance welder whose castings were built more than a century ago, and whose last update was to receive a PLC and a black-and-white screen for recipe configurations in 2003, to be updated with 2024 security systems. You just take your clipboard to it, punch in the targets, and precisely melt some steel.

Typically, you only connect to machines like this by literally picking up your laptop and walking out to the machine with an Ethernet patch cable. If anything beyond that is needed, I expect my customers to put them on a firewalled OT network, or to bridge between information technology (IT) and operations technology (OT) with a Tosibox, Ixon, or other SCADA/VPN appliance.

By @tormeh - 3 months
I remain unconvinced that you shouldn't air-gap systems just because that means you can't use internet-centric development practices. I find this argument absurd. The systems that should have their Ethernet ports epoxied also should never have been programmed using internet-centric development practices in the first place. Your MRI machine fetches JS dependencies from NPM on boot? Straight to jail. Not metaphorically.
By @forinti - 3 months
After watching a video of a person playing with a McDonald's kiosk, I started to do the same with equipment I found at different places.

One food court had kiosks with Windows and complete access to the Internet. Somebody could download malware and steal credit card data. Every time I used one, I turned it off or left a message on the screen. Eventually they started running it in kiosk mode.

Another was a parking kiosk. It was never hardened. I guess criminals haven't caught on to this yet.

The third was an interactive display for a brand of beer. This one wasn't going to cause any harm, but I liked to leave Notepad open with "Drink water" on it. Eventually they turned it off. That's one way to fix it, I guess.

By @djha-skin - 3 months
> If you are operating a private network, your internal services probably don't have TLS certificates signed by a popular CA that is in root programs. You will spend many valuable hours of your life trying to remember the default password for the JRE's special private trust store and discovering all of the other things that have special private trust stores, even though your operating system provides a perfectly reasonable trust store that is relatively easy to manage, because of Reasons. You will discover that in some tech stacks this is consistent but in others it depends on what libraries you use.

Oof, I feel this one. I tried to get IntelliJ's JRE trust store to understand that there was a new certificate for Zscaler that it had to use; there were two or three different JDKs to choose from, and all of their trust stores were given the new certificate, and it still didn't work and we didn't know why.
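
For the record, the chore usually boils down to something like this, repeated for every bundled JDK (the paths and alias are assumptions; "changeit" is the JRE trust store's well-known default password the article jokes about):

    import subprocess

    JDK_CACERTS = "/usr/lib/jvm/default/lib/security/cacerts"  # location varies per JDK

    subprocess.run([
        "keytool", "-importcert", "-noprompt",
        "-alias", "zscaler-root",
        "-file", "/etc/ssl/certs/zscaler-root.pem",
        "-keystore", JDK_CACERTS,
        "-storepass", "changeit",  # the default password in question
    ], check=True)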

By @RF_Savage - 3 months
There is also hamnet, which is partly internet-routable and partly not, on the 44Net IP block.

https://hamnetdb.net/map.cgi

It has interesting limitations due to the amateur radio spectrum used, including a total ban on commercial use.

That is the social contract of the spectrum: you get cheap access to loads of spectrum between 136 kHz and 241 GHz, but you can't make money with it.

By @NoboruWataya - 3 months
It seems fairly obvious that an airline reservation system needs to be connected to a network at least; I haven't heard many people claim those should have been all offline. But I have heard stories of, for example, lathe machines in workshops that were disabled by this. You gotta wonder whether they really needed to be online. (I'm sure there are reasons, but they are reasons that should be weighed against the risks.)

Beyond that there are plenty of even more ridiculous examples of things that are now connected to the internet, like refrigerators, kettles, garage doors etc. (I don't know if many, or any, of these things were affected by the CrowdStrike incident, but if not, it's only a matter of time until the next one.)

As for the claim that non-connected systems are "very, very annoying", my experience as a user is that all security is "very, very annoying". 2FA, mandatory password changing, locked down devices, malware scanners, link sanitisers - some of it is necessary, some of it is bullshit (and I'm not qualified to tell the difference), but all of it is friction.

By @RajT88 - 3 months
My big take-away is not that "all these systems shouldn't be connected to the internet", it's a few other things:

1. These systems shouldn't allow outbound network flows. That will stop all auto-updates, which you can then manage via internal distribution channels.

2. Even without that, you can disable auto-updates on many enterprise software products - Windows notably, but also Crowdstrike itself. I heard about CS customers disabling auto-update and doing manual rollouts who were saved by this practice.

3. Tacking on to number 2: gradual rollout of updates which you've done some smoke testing on. Just in case. Again - I heard of CS customers who did a gradual rollout and managed to have only a fraction of their machines impacted (a sketch of the idea follows below).
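
A minimal sketch of that staged-rollout idea in Python; fleet, deploy, and healthy are illustrative assumptions, not any vendor's API:

    import random

    def staged_rollout(fleet, deploy, healthy, waves=(0.01, 0.10, 0.50, 1.0)):
        """Push an update in growing waves, halting if a wave looks unhealthy."""
        machines = list(fleet)
        random.shuffle(machines)       # canaries should be a random sample
        done = 0
        for fraction in waves:
            target = int(len(machines) * fraction)
            for machine in machines[done:target]:
                deploy(machine)
            done = target
            if not all(healthy(m) for m in machines[:done]):
                raise RuntimeError("wave failed its health check; rollout halted")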

By @lokimedes - 3 months
That pretty well summed up my time delivering state of the art AI solutions to military customers. 80% of the effort was getting internet-native tooling to work seamlessly in an air-gapped environment.
By @eqqn - 3 months
"Don't worry, the software in question seems to have fallen out of favor and cannot hurt you."

It may not be the software in question, but proprietary snowflake entitlement management software with a lot of black-box, proprietary voodoo, no disaster recovery capacity, and a design that would have been considered tech debt a decade ago... disgracefully came to life in the year 2021. It did not gracefully recover from clownstrike, to say the least.

By @tjoff - 3 months
Good article, though I really thought it would be about the other end. You know those hacking movies from the 90s(?) where the good guys face a hacker attack, frantically typing at the keyboard trying to keep the hackers away. It's a losing battle, but just in the nick of time (the progress bar is at 97%) the hero unplugs the power cord or internet cable.

Or, in the case of CrowdStrike: I can imagine support starts to get some calls, and at some point you realize that something has gone horribly wrong. An update, maybe not obviously which one, is wreaking havoc. How do you stop it? Have you foreseen this scenario and built a simple switch to stop sending updates?

Or do you cut the internet? Unlike the movies, there isn't a single cord to pull; maybe the servers are in a different building or some cloud somewhere. They probably have a CDN - can you pull the files from it efficiently?

Now maybe by the time they discovered this it was mostly too late; all online systems might already have gotten the latest updates (but even if that is the case, do they know that it is?).
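
The "simple switch" asked about above can be embarrassingly low-tech. A sketch, with the file path and payload handling as assumptions:

    from pathlib import Path

    KILL_SWITCH = Path("/etc/updates/HALT_ROLLOUT")  # operators touch this file to stop

    def serve_update(payload: bytes) -> bytes | None:
        """Return the update payload, or None while the rollout is halted."""
        if KILL_SWITCH.exists():
            return None   # every client now gets "no update available"
        return payload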

By @rowbin - 3 months
>> The stronger versions, things from List 1 and List 2, are mostly only seen in defense and intelligence

And I don't think that is enough. I agree that it is easier, and sufficient, for most systems to just be connected over the internet. But health, aviation, and critical infrastructure in general should try to be offline as much as possible. Many of the issues described with being offline stem from having many third-party dependencies (which typically assume internet access). In general, but for critical infrastructure especially, you want as few third-party dependencies as possible. Sure, it's not as easy as saying "we don't want third-party dependencies" and all is well. You'll have to make a lot of sacrifices. But you also have a lot to gain when dramatically decreasing complexity, not only from a security standpoint. I really do believe there are many cases where it would be better to use a severely limited tech stack (hardware and software) and use a data-diode-like approach where necessary.

One of the key headaches mentioned when going offline is TLS. I agree, and I think the solution is to not use TLS at all. Using a VPN inside the air-gapped network should be slightly better. It's still a huge headache and you have to get it right, but being connected to the internet at all times is also a HUGE headache.
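
A rough sketch of the data-diode style the commenter describes: telemetry leaves the protected network over one-way UDP, and nothing ever listens for a reply (host, port, and framing are assumptions; real diodes enforce the one-way flow in hardware):

    import json
    import socket

    DIODE_EGRESS = ("10.0.0.1", 9999)  # receiver on the lower-trust side

    def emit(reading: dict) -> None:
        """Fire-and-forget: nothing is read back, so nothing can come back in."""
        payload = json.dumps(reading).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(payload, DIODE_EGRESS)

    emit({"ward": 3, "device": "pump-7", "status": "ok"})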

By @halfcat - 3 months
There are many fundamental assumptions that ought to be challenged like this.

Does a computer that can access your accounting system need to access the internet? Or email?

A user could run two computers, one that’s for internet stuff, and one that does important internal stuff. But that’s a silly idea because it’s costly.

However, we can achieve the same thing with virtualization, where a user’s web browser is running in a container/VM somewhere and if compromised, goes away.

Stuff like this exists throughout society in general. When should a city employee carry a gun? On one end of the spectrum, the SWAT team probably needs guns. On the other end, the guy who put a note on my door that my fence was leaning into the neighbor’s property didn’t have a gun. So the question is: is a traffic stop closer to the SWAT team or to the guy kindly letting me know I’ve violated a city ordinance?

I don’t know why these things don’t get reassessed. Is it that infrastructure is slower to iterate on? Reworking the company’s network infrastructure, or retraining law enforcement departments, is a big, costly undertaking.

By @1970-01-01 - 3 months
One way to see that she is right is to ask how many layers of 'disconnect from the Internet' you need. Are you expecting a firewall rule of deny-all? Closing all ports on the hosts? Ripping away the TCP/IP stack? Where do you expect the line of success to be? Remember: all traffic is routable.
By @andrewstuart - 3 months
I don't think systems should never be connected to the Internet.

I did find it surprising, however, that so many of the systems shown on TV run Windows.

Digital signage screens, shopping registers, all sorts of stuff that I assumed would be running Linux.

It is surprising to me that systems with functions like a cash register would be doing automatic updates at all.

By @einpoklum - 2 months
> Here's the thing: virtually the entire software landscape has been designed with the assumption of internet connectivity.

The thing that drives me nuts is not even that, which is bad enough, but the assumption that the Internet connection is always stable and that it is legitimate to say "wait until the connection is up again", as though there are no such things as power outages, network-level errors, cable tears, physical socket failures and such.
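
The fix the commenter is implying is mundane: treat a down connection as a normal state, not an exception. A minimal retry-with-backoff sketch (the names here are illustrative):

    import time

    def fetch_with_backoff(fetch, attempts=5, base_delay=1.0):
        """Call fetch(), backing off exponentially while the network is down."""
        for attempt in range(attempts):
            try:
                return fetch()
            except OSError:            # refused, unreachable, cable torn, etc.
                time.sleep(base_delay * 2 ** attempt)
        raise RuntimeError("network still unavailable; degrade gracefully instead")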

By @creesch - 3 months
A bit of a tangent to the subject of the blog, but something that has been bugging me for a while. What's up with all these blogs that choose fonts that are just not that good for readability? In this case, monospace. It's not code, it is not formatted as code, making it a bad choice for comfortable reading.

Are these people not writing blogs to be read?

And just to be ahead of it, just because you are able to read it doesn't mean it wouldn't be easier and more comfortable to read in a more suitable font.

By @fifteen1506 - 3 months
The author is failing to see a potential solution.

Whitelist all the IPs needed for business functionality, then enable the whole Internet for an hour once every 3 hours.

Bonus points if you can do it by network segment.

It would be enough to spare half your computers from the CrowdStrike issue, since I believe the update was pulled after an hour.

Will anyone do this? Probably not. But it is worth entertaining as a possibility between fully-on connectivity and full disconnection.
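
A hedged sketch of that schedule using iptables' default OUTPUT policy (the whitelisted business IPs would sit in explicit ACCEPT rules above the policy and are not shown; this is illustrative, not production firewall advice):

    import subprocess
    import time

    def set_default_outbound(policy: str) -> None:
        # flips the catch-all for outbound traffic; explicit ACCEPT rules
        # for whitelisted business IPs keep working either way
        subprocess.run(["iptables", "-P", "OUTPUT", policy], check=True)

    while True:
        set_default_outbound("DROP")      # two hours: whitelist-only
        time.sleep(2 * 60 * 60)
        set_default_outbound("ACCEPT")    # one hour: whole Internet allowed
        time.sleep(1 * 60 * 60)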

By @gizmo - 3 months
> But that just, you know, scratches the surface. You probably develop and deploy software using a half dozen different package managers with varying degrees of accommodation for operating against private, internal repositories.

That's non-ironically the problem. Current software culture creates "secure software" with a 200-million-line-of-code attack surface and then acts surprised when it blows up spectacularly. We do this because there is effectively no liability for software vendors or for their customers. What software security vendors sell is regulatory compliance, not security.

By @readyplayernull - 3 months
And I just got this from big bro Google:

[...] With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. [...] Devices in the network use Bluetooth to scan for nearby items.

Full email content:

Find My Device network is coming soon

You can use Find My Device today to locate devices when they’re connected to the internet. With the new Find My Device network, you’ll be able to locate your devices even if they’re offline. You can also find any compatible Fast Pair accessories when they’re disconnected from your device. This includes compatible earbuds and headphones, and trackers that you can attach to your wallet, keys, or bike.

To help you find your items when they’re offline, Find My Device will use the network of over a billion devices in the Android community and store your devices’ recent locations.

How it works

Devices in the network use Bluetooth to scan for nearby items. If other devices detect your items, they’ll securely send the locations where the items were detected to Find My Device. Your Android devices will do the same to help others find their offline items when detected nearby.

Your devices’ locations will be encrypted using the PIN, pattern, or password for your Android devices. They can only be seen by you and those you share your devices with in Find My Device. They will not be visible to Google or used for other purposes.

You’ll get a confirmation email in 3 days when this feature is turned on for your Android devices. Until then, you can opt out of the network through Find My Device on the web. Your choice will apply to all Android devices linked to [email]. After the feature is on, you can manage device participation anytime through Find My Device settings on the device.

Learn more

By @asynchronous - 3 months
Great write up on the issues and challenges with airgapped and entirely internet avoidant systems in the modern software world.
By @llm_trw - 3 months
The description of updates is painfully true.

A long time ago I worked at a broker trader where all communications, including server communications, had to go through Zscaler as a man in the middle.

What had been routine all of a sudden became impossible.

Turns out that git, apt, pip, cabal and ctan all had different ideas about how easy they should make this. After a month of fighting each of them I gave up. I just downloaded everything from their public FTP repos and built from source over a week. I wish good luck to whoever has to maintain it.
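
For what it's worth, two of those tools do have documented knobs for a corporate root CA; the bundle path below is an assumption:

    import subprocess

    CA_BUNDLE = "/etc/ssl/certs/corporate-ca.pem"  # the proxy's root cert (assumed path)

    # git: trust the corporate CA for HTTPS remotes
    subprocess.run(["git", "config", "--global", "http.sslCAInfo", CA_BUNDLE], check=True)

    # pip: use the corporate CA when talking to package indexes
    subprocess.run(["pip", "config", "set", "global.cert", CA_BUNDLE], check=True)

The pain the commenter hit is that every tool hides this knob somewhere different, if it exists at all.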

By @justin_oaks - 3 months
> Use the system trust store. Please. I am begging you.

I'm looking at you, Node.js and Firefox.

Node at least supports an environment variable (NODE_EXTRA_CA_CERTS) to add certificates to the list of trusted certs, but it's not as simple as an option to use the system store.
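
Wiring that variable up looks roughly like this (the cert path and app name are assumptions):

    import os
    import subprocess

    # launch a Node app so it trusts an internal CA in addition to its bundled roots
    env = dict(os.environ, NODE_EXTRA_CA_CERTS="/etc/ssl/certs/internal-ca.pem")
    subprocess.run(["node", "app.js"], env=env, check=True)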

By @gwern - 3 months
Marvin Minsky in 1970 (54 years ago) on how you can't just "turn off the X" when it is a powerful economically-valuable pervasive computer system:

"Many computer scientists believe that people who talk about computer autonomy are indulging in a lot of cybernetic hoopla. Most of these skeptics are engineers who work mainly with technical problems in computer hardware and who are preoccupied with the mechanical operations of these machines. Other computer experts seriously doubt that the finer psychic processes of the human mind will ever be brought within the scope of circuitry, but they see autonomy as a prospect and are persuaded that the social impact will be immense.

Up to a point, says Minsky, the impact will be positive. “The machine dehumanized man, but it could rehumanize him.” By automating all routine work and even tedious low-grade thinking, computers could free billions of people to spend most of their time doing pretty much as they d—n please. But such progress could also produce quite different results. “It might happen”, says Herbert Simon, “that the Puritan work ethic would crumble to dust and masses of people would succumb to the diseases of leisure.” An even greater danger may be in man’s increasing and by now irreversible dependency upon the computer.

The electronic circuit has already replaced the dynamo at the center of technological civilization. Many US industries and businesses, the telephone and power grids, the airlines and the mail service, the systems for distributing food and, not least, the big government bureaucracies would be instantly disrupted and threatened with complete breakdown if the computers they depend on were disconnected. The disorder in Western Europe and the Soviet Union would be almost as severe. What’s more, our dependency on computers seems certain to increase at a rapid rate. Doctors are already beginning to rely on computer diagnosis and computer-administered postoperative care. Artificial Intelligence experts believe that fiscal planners in both industry and government, caught up in deepening economic complexities, will gradually delegate to computers nearly complete control of the national (and even the global) economy. In the interests of efficiency, cost-cutting and speed of reaction, the Department of Defense may well be forced more and more to surrender human direction of military policies to machines that plan strategy and tactics. In time, say the scientists, diplomats will abdicate judgment to computers that predict, say, Russian policy by analyzing their own simulations of the entire Soviet state and of the personalities—or the computers—in power there. Man, in short, is coming to depend on thinking machines to make decisions that involve his vital interests and even his survival as a species. What guarantee do we have that in making these decisions the machines will always consider our best interests? There is no guarantee unless we provide it, says Minsky, and it will not be easy to provide—after all, man has not been able to guarantee that his own decisions are made in his own best interests. Any supercomputer could be programmed to test important decisions for their value to human beings, but such a computer, being autonomous, could also presumably write a program that countermanded these “ethical” instructions. There need be no question of computer malice here, merely a matter of computer creativity overcoming external restraints."

By @gz5 - 3 months
we can use the Internet without being used by the Internet.

an open source example: https://blog.openziti.io/no-listening-ports

By @renegat0x0 - 3 months
The connection between clownstrike and cybersecurity is flimsy. This was not an attack.

This was a resource management problem, a process problem.

Meaning: if your processes are invalid, you can also fail in an offline scenario. If you do not treat quality control or testing correctly, you're gonna have a bad time.

By @jongjong - 3 months
'Disconnect from the internet' is a kind of 'security through obscurity', which isn't very good security.

It's basically an admission that the software may be full of vulnerabilities and the only way to protect it is to limit its exposure to the outside world.

The root of the problem is that almost all software is poorly designed and full of unnecessary complexity which leaves room for exploitation. Companies don't have a good model for quality software and don't aim for it as a goal. They just pile on layer upon layer of complexity.

Quality software tends to be minimalistic. The code should be so easy to read that an average hacker could hack it in under an hour if there was an issue with it... But if the code is both simple and there is no vulnerability within it, then you can rest assured that there exist no hackers on the face of the earth who can exploit it in unexpected ways.

The attack surface should be crystal clear.

You don't want to play a game of cat and mouse with hackers because it's only a matter of time before you come across a hacker who can surpass your expectations. Also, it's orders of magnitude more work to create complex secure software than it is to create simple secure software.

The mindset to adopt is that bad code deserves to be hacked. Difficulty involved in pulling off the hack is not a factor. It's a matter of time before hackers can disentangle the complexity.