July 14th, 2024

A bit more regarding UTM SE on the iPad

The author shares their experience using UTM SE on an M1 iPad Pro for local development, finding it usable for running arm64 binaries despite its limitations. After comparing it with a-Shell and iSH, they suggest a small Linux single-board computer as a better option.

The article recounts the author's experience using UTM SE on an M1 iPad Pro for local development. Despite the lack of JIT, they found it usable for running arm64 binaries and for tasks like building Rust binaries; drawbacks include slow performance, issues with copy/paste, and poor integration with the rest of iPadOS. The article compares UTM SE with alternatives like a-Shell and iSH, weighing their strengths and weaknesses for CLI work on an iPad, and concludes that a small Linux single-board computer is a better option for development on the go thanks to its performance and battery-life advantages.

2 comments
By @derefr - 5 months
> because merely decompressing the packages is very slow

Is this because the guest code is making syscalls that context-switch, which imposes a lot of interpretation overhead? Or is it mostly because the emulator's disk driver maps guest writes to disk X directly onto host IO against the file representing disk X, such that O(N) emulated IOPS translate to O(N) host IOPS?
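
The second scenario would look something like this sketch (purely hypothetical names; not UTM SE's actual driver), where each guest sector write is forwarded as exactly one host pwrite(2):

```c
/* Hypothetical passthrough block driver: every guest-side write
 * is forwarded 1:1 to the host file backing the disk image, so
 * N emulated IOPS cost N host IOPS plus per-syscall overhead. */
#include <stdint.h>
#include <unistd.h>

#define SECTOR_SIZE 512

/* backing_fd is an open fd for the disk image on the host. */
ssize_t guest_write_sector(int backing_fd, uint64_t lba,
                           const void *buf)
{
    /* One guest sector write becomes one host pwrite(2) at the
     * corresponding offset. No batching, no caching. */
    return pwrite(backing_fd, buf, SECTOR_SIZE,
                  (off_t)lba * SECTOR_SIZE);
}
```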

If it's the latter, then this sounds like it'd be the same problem that Docker.app dealt with on macOS: some guest apps do IO in a single-threaded + serially-dependent (write, then read back, then write back, then read back...) manner, and so they degrade basically quadratically when there's any kind of IO-syscall latency. (You can see a similar effect if you run e.g. `apt-get install` within an NFS-mounted chroot.)
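
A sketch of that access pattern (hypothetical, not any real tool's code): each step depends on the previous one, so N steps pay N full round trips of IO-syscall latency, and if the workload also re-reads what it has accumulated so far, per-step cost grows with N and the total trends quadratic.

```c
/* Serially-dependent IO: step i+1 cannot be issued until step i's
 * result is visible, so N steps cost N full round trips; nothing
 * can be batched or pipelined. */
#include <unistd.h>

void serially_dependent_io(int fd, char *buf, size_t len, int n)
{
    for (int i = 0; i < n; i++) {
        pwrite(fd, buf, len, 0);  /* write...               */
        fsync(fd);                /* ...force it to disk... */
        pread(fd, buf, len, 0);   /* ...then read it back
                                     before the next step   */
    }
}
```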

Alternately, if the emulator's disk driver mmap(2)s a host file to serve as the emulated disk—then large-batch streaming writes from the guest would become large piles of dirty pages in need of flushing; and large-batch streaming reads would become large piles of page faults. And because the guest workload is opaque to the hypervisor, it wouldn't be able to use madvise(2) or the like to tell the iPadOS kernel how to predict/coalesce these.
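
For concreteness, here is a minimal sketch of that design (an assumption about how such a driver could look, not UTM SE's code), including the kind of madvise(2) hint a hypervisor could give if it could actually see the guest's access pattern:

```c
/* Backing a guest disk with an mmap(2)ed host file. Streaming
 * guest writes dirty pages the kernel must later flush; streaming
 * reads fault pages in one at a time. */
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

void *map_disk_image(const char *path, size_t size)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        return MAP_FAILED;

    void *disk = mmap(NULL, size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after close */

    if (disk != MAP_FAILED) {
        /* The hint a hypervisor could give if it knew the guest
         * was streaming: read ahead aggressively, drop pages
         * behind the cursor. An opaque guest workload is exactly
         * what prevents it from knowing this. */
        madvise(disk, size, MADV_SEQUENTIAL);
    }
    return disk;
}
```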

If either of these is the problem with UTM SE's emulated-block-device IO perf, I'd be curious how much better emulation of IO-intensive operations would perform if they reimplemented the block-device driver as a userland large-block-size page file — i.e. what an RDBMS calls a buffer pool.
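
A minimal sketch of that idea, with illustrative names and sizes (nothing here is from UTM SE): cache the image in large blocks so small guest IOPS mostly hit memory, and host IO happens only on fill and write-back.

```c
/* Buffer-pool-style block cache at 1 MiB granularity. A single
 * cache entry keeps the sketch short; a real pool would be a hash
 * table of blocks with an eviction policy. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE (1 << 20)   /* 1 MiB cache granularity */

struct cached_block {
    uint64_t block_no;  /* which 1 MiB block of the image */
    int      valid;
    int      dirty;
    uint8_t  data[BLOCK_SIZE];
};

static struct cached_block cache;

static struct cached_block *get_block(int fd, uint64_t block_no)
{
    if (cache.valid && cache.block_no == block_no)
        return &cache;                     /* hit: no host IO   */

    if (cache.valid && cache.dirty)        /* write-back evict  */
        pwrite(fd, cache.data, BLOCK_SIZE,
               (off_t)cache.block_no * BLOCK_SIZE);

    pread(fd, cache.data, BLOCK_SIZE,      /* fill: one large   */
          (off_t)block_no * BLOCK_SIZE);   /* host read         */
    cache.block_no = block_no;
    cache.valid = 1;
    cache.dirty = 0;
    return &cache;
}

/* A 512-byte guest sector write now usually touches only memory. */
void guest_write_sector(int fd, uint64_t lba, const void *buf)
{
    uint64_t off = lba * 512;
    struct cached_block *b = get_block(fd, off / BLOCK_SIZE);
    memcpy(b->data + (off % BLOCK_SIZE), buf, 512);
    b->dirty = 1;
}
```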

(AFAIK this is already a best-practice for hypervisors that work with expected-high-latency disks. If you tell VMWare ESXi/vSphere to use an iSCSI storage adapter, then that storage adapter is going to do client-side in-memory caching of the iSCSI target at some larger-than-one-disk-sector granularity.)

By @dark-star - 5 months
Man I wish they would explain what UTM SE is in either the article or in any of the linked (other) articles...