Unfashionably secure: why we use isolated VMs
Thinkst Canary's security architecture uses isolated virtual machines for each customer, enhancing data security and compliance while incurring higher operational costs and requiring strong configuration management skills.
Thinkst Canary employs a security architecture that emphasizes complete customer isolation through the use of isolated virtual machines (VMs). Unlike many cloud-managed services that run multi-tenant environments, Canary gives each customer their own Console, keeping their data separate and secure. This design choice mitigates the risks of unauthorized access and data breaches that come with shared environments. The architecture consists of various services, all contained within individual AWS EC2 instances per customer, which simplifies monitoring and performance assessment.
While this approach may lack the trendy appeal of modern cloud-native technologies, it offers significant security benefits. The reliance on AWS's hypervisor provides a robust security boundary, limiting the impact of potential vulnerabilities. Additionally, operational issues are confined to individual customers, enhancing reliability and compliance with regulatory requirements. The isolated VM model also facilitates easier geographic data management and staged rollouts of new features.
However, this architecture incurs higher operational costs and demands strong configuration management skills, as maintaining thousands of instances can be complex. Custom monitoring solutions are necessary to ensure the health of these instances, as existing AWS tools may not meet all requirements. Despite these challenges, the benefits of enhanced security and customer-focused service delivery make the isolated VM approach a strategic choice for Thinkst Canary.
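To make the per-customer model concrete, here is a minimal sketch of what provisioning a dedicated Console instance could look like using boto3. The AMI, instance type, subnet, and tag names are illustrative assumptions, not Thinkst's actual tooling or configuration.

```python
# Minimal sketch: one dedicated EC2 instance ("Console") per customer.
# The AMI, instance type, subnet and tag names are illustrative
# assumptions, not Thinkst's actual configuration.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

def provision_console(customer_id: str) -> str:
    """Launch an isolated, single-tenant instance for one customer."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # hypothetical hardened Console AMI
        InstanceType="t3.small",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",  # hypothetical per-region subnet
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "customer", "Value": customer_id},
                {"Key": "role", "Value": "console"},
            ],
        }],
    )
    return resp["Instances"][0]["InstanceId"]

# One instance per customer: the hypervisor, not application code,
# is the isolation boundary between tenants.
print(provision_console("acme-corp"))
```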
Related
Are rainy days ahead for cloud computing?
Some companies are moving away from cloud computing due to cost concerns. Cloud repatriation trend emerges citing security, costs, and performance issues. Debate continues on cloud's suitability, despite its industry significance.
Are rainy days ahead for cloud computing?
Some companies are moving away from cloud computing due to cost and other concerns. 37signals saved $1m by hosting data in a shared center. Businesses are reevaluating cloud strategies for cost-effective solutions.
From Cloud Chaos to FreeBSD Efficiency
A client shifted from expensive Kubernetes setups on AWS and GCP to cost-effective FreeBSD jails and VMs, improving control, cost savings, and performance. Real-world tests favored FreeBSD over cloud solutions, emphasizing efficient resource management.
A hard look at AWS GuardDuty shortcomings
AWS GuardDuty has limitations in coverage, cost, and efficacy, leading to missed threats and high noise levels. Canary Infrastructure is suggested as a complementary, cost-effective solution for enhanced threat detection.
- There is a divide between proponents of VMs for security and those who advocate for containerization, with some arguing that VMs are overused and inefficient.
- Concerns about the operational costs and resource management of using VMs versus containers are prevalent, with some suggesting alternatives like Kubernetes namespaces for customer isolation.
- Many commenters emphasize the importance of data security and isolation, questioning the effectiveness of current multi-tenant architectures.
- Some participants highlight the need for better orchestration and management tools for VMs, particularly in open-source environments.
- There is a general consensus that while VMs provide strong isolation, they come with trade-offs in terms of resource efficiency and operational complexity.
Modern "containers" were invented to make things more reproducible ( check ) and simplify dev and deployments ( NOT check ).
Personally, FreeBSD Jails / Solaris Zones are the thing I like to dream of as being pretty much as secure as a VM and a perfect fit for a sane dev and ops workflow. I haven't dug too deeply into this in practice; maybe I'm afraid to learn the contrary, but I hope not.
Either way Docker is "fine" but WAY overused and overrated IMO.
HTTPS is not allowed (locked down for security!), so communication is smuggled over DNS? Uhh... I suspect that a lot of what the customer "security" departments do doesn't really make sense...
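To make the DNS angle concrete: the reporting data rides in the queries themselves when only port 53 is open. A minimal sketch, assuming dnspython and a hypothetical collector domain whose authoritative nameserver logs incoming queries:

```python
# Sketch of the "smuggle data over DNS" idea: encode a small payload as
# labels under a domain you control, so the authoritative nameserver
# (which you also control) receives it even when HTTPS egress is blocked.
# Assumes dnspython; "example-collector.com" is a hypothetical domain.
import base64
import dns.resolver  # pip install dnspython

def report_over_dns(payload: bytes, domain: str = "example-collector.com") -> None:
    encoded = base64.b32encode(payload).decode().rstrip("=").lower()
    # DNS labels are limited to 63 bytes each, so split the payload
    labels = [encoded[i:i + 60] for i in range(0, len(encoded), 60)]
    qname = ".".join(labels + [domain])
    try:
        dns.resolver.resolve(qname, "A")  # the answer is irrelevant; the query is the message
    except dns.resolver.NXDOMAIN:
        pass  # expected: the nameserver logs the query and returns NXDOMAIN

report_over_dns(b"alert: canary triggered on host db-01")
```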
A short survey on this stuff:
A Bromium demo circa 2014 was a web browser where every tab was an isolated VM, and every HTTP request was an isolated VM. Hundreds of VMs could be launched in a couple of hundred milliseconds. Firecracker has some overlap.
> Lastly, this approach is almost certainly more expensive. Our instances sit idle for the most part and we pay EC2 a pretty penny for the privilege.
With many near-idle server VMs running identical code for each customer, there may be an opportunity to use copy-on-memory-write VMs with fast restore of unique memory state, using the techniques employed in live migration.
Xen/uXen/AX: https://www.platformsecuritysummit.com/2018/speaker/pratt/
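For a flavour of the restore-from-snapshot idea the comment alludes to, here is a sketch against Firecracker's snapshot-load endpoint over its API socket. The requests_unixsocket dependency and the exact request fields are assumptions based on my reading of Firecracker's API docs and may differ between versions; check them before relying on this.

```python
# Sketch of the "restore many near-identical VMs from one memory snapshot"
# idea using Firecracker's snapshot API. Field names are assumptions drawn
# from the published API and may vary by Firecracker version.
import requests_unixsocket  # pip install requests-unixsocket

def restore_microvm(socket_path: str, snapshot: str, memory_file: str) -> None:
    session = requests_unixsocket.Session()
    base = "http+unix://" + socket_path.replace("/", "%2F")
    resp = session.put(
        base + "/snapshot/load",
        json={
            "snapshot_path": snapshot,        # serialized VM state
            "mem_backend": {                  # guest memory backing file
                "backend_type": "File",
                "backend_path": memory_file,
            },
            "resume_vm": True,
        },
    )
    resp.raise_for_status()

# Each customer VM starts from the same golden snapshot; memory pages that
# never diverge can stay shared copy-on-write across VMs backed by the file.
restore_microvm("/tmp/customer-42.sock", "/snapshots/console.snap", "/snapshots/console.mem")
```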
As more people wake up to the realization that we shouldn't trust code, I expect the number of civilization-wide outages to decrease.
Working in the cloud, they're not going to be able to use my other favorite security tool, the data diode, which can positively block ingress of control while still allowing egress of reporting data.
I'm not anti-VM; they're great technology, I just don't think they should be the only way to get protection. VMs are incredibly inefficient... what's that you say, they're not? OK, then why aren't they integrated into protected-mode OSes so that those will actually be protected?
The author did acknowledge it's a trade-off, but the economics of this trade-off may or may not make sense depending on how much you need to charge your customers to remain competitive with rival offerings.
- From a Docker/Moby Maintainer
Each customer gets their own namespace, each namespace is locked down in terms of networking, and I deploy Postgres into each namespace using the Postgres operator.
I've built an operator for my app, so deploying the app into a namespace is as simple as deploying the manifest.
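A minimal sketch of that namespace-per-customer pattern with the official Kubernetes Python client: create a tenant namespace and apply a default-deny NetworkPolicy. The operator-managed Postgres deployment the commenter mentions is omitted, and the tenant name is hypothetical.

```python
# Sketch of namespace-per-customer isolation: one namespace per tenant,
# locked down with a default-deny NetworkPolicy. Assumes the official
# kubernetes Python client; names are hypothetical.
from kubernetes import client, config

def provision_tenant(customer: str) -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    core = client.CoreV1Api()
    core.create_namespace(
        client.V1Namespace(metadata=client.V1ObjectMeta(name=f"tenant-{customer}"))
    )

    net = client.NetworkingV1Api()
    net.create_namespaced_network_policy(
        namespace=f"tenant-{customer}",
        body=client.V1NetworkPolicy(
            metadata=client.V1ObjectMeta(name="default-deny-all"),
            spec=client.V1NetworkPolicySpec(
                pod_selector=client.V1LabelSelector(),  # selects every pod in the namespace
                policy_types=["Ingress", "Egress"],     # no rules listed => deny both directions
            ),
        ),
    )

provision_tenant("acme-corp")
```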
This point is made in the context of VM bits, but that switching cost could (in theory, haven't done it myself) be mitigated using, e.g. Terraform.
The brace-for-shock barrier at the enterprise level is going to be exfiltrating all of that valuable data. Bezos is running a Hotel California for that data: "You can checkout any time you like, but you can never leave" (easily).
What I would like to see is more app virtualization software that isolates the app from the underlying OS enough to provide a safe enough cage for it.
I know there are some commercial offerings out there (and a free one), but maybe someone who has opinions about them, or knows of additional ones, can chime in?
Last time I looked into this for on-prem, the solutions seemed very enterprise, pay-the-big-bucks focused. Not a lot in the OSS space. What do people use for on-prem VM orchestration that is OSS?
Would give you very nearly as good isolation for much lower cost.
Maybe someday that market will boom a bit more, so we can run hypervisors hosting VMs that each serve a single application: say, a BSD kernel that runs Postgres as its init process, or something (I know that's probably oversimplified :P).
There's a lot of room in the VM space for improvement, but pretty much all of it is impossible if you need to load an entire multi-purpose, multi-user OS into the VM.
Until then, the debate between VMs and containerisation will continue.
I'm not sure why the author doesn't understand that he could have his cake and eat it too.
There has got to be a better middle ground, like multi-tenant but with strong splits (each customer on their own database, etc.).
With virtualization the attack surface is narrowed to pretty much just the virtualization interface.
The problem with current virtualization (or more specifically, the VMMs) is that it can be cumbersome; memory management, for example, is a serious annoyance. The kernel is built to hog memory for caches and the like, but you don't want the guest doing that, since you want to overcommit memory: guests will rarely use 100% of what they're given (especially when the guest is just a jailed singular application). Workarounds such as free page reporting and drop_caches hacks exist.
I would expect eventually to see high-performance custom kernels for application jails. For example, gVisor[1] acts as a syscall interceptor (and can use KVM too!) with a custom kernel; or a modified Linux kernel with pain points patched for the guest.
In effect, what virtualization achieves is the ability to roll back much of the advantage of having an operating system in the first place, in exchange for securely isolating the workload. But because the workload expects an underlying operating system to serve it, one has to be provided, so now you have a host operating system, a guest operating system, and some narrow interface between the two to not be a complete clown show. As you grow that interface to properly slave the guest to the host, to reduce resource consumption and gain more control, you will eventually end up reimagining the operating system, perhaps? Or come full circle to the BSD jail idea: imagine the host kernel having hooks into every guest kernel syscall; is this not a BSD jail with extra steps?
[1] <https://gvisor.dev/>
This can be boiled down to "we use AWS's built-in security, not our own." Using EC2 instances is then nothing but a choice. You could do the exact same thing with containers (with Fargate, perhaps?): one container per tenant, no relations between containers => the same thing (but cheaper).
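A sketch of that one-container-per-tenant idea with boto3 and ECS on Fargate; the cluster, task definition, subnet, and security group identifiers are hypothetical placeholders.

```python
# Sketch of "one container per tenant on Fargate". The isolation boundary
# becomes Fargate's per-task micro-VM rather than a dedicated EC2 instance.
# All identifiers below are hypothetical.
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

def launch_tenant_task(customer_id: str) -> str:
    resp = ecs.run_task(
        cluster="tenant-consoles",                  # hypothetical cluster name
        launchType="FARGATE",
        taskDefinition="console:1",                 # hypothetical task definition
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
                "assignPublicIp": "DISABLED",
            }
        },
        tags=[{"key": "customer", "value": customer_id}],
    )
    return resp["tasks"][0]["taskArn"]

print(launch_tenant_task("acme-corp"))
```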
This made me laugh for some reason
I had my popcorn ready, right? What is the complaint here?
If the network comes down, stores will have no choice but to hand out the food for free.
I am currently not troubleshooting my solutions; I am troubleshooting the VM.