blog.kdgregory.com

Thursday, October 9, 2025

Things I hate about the Mazda CX-70

It's been a little over a year since I bought a CX-70. Overall, I really like it: it's comfortable, handles well, looks great inside and out, has enough cargo space for my needs, and is able to pull my 3500-pound travel trailer with ease. But there are a few things that I really don't like, to the extent that Mazda might not be on my list the next time I look for a car.

I've listed them here in increasing order of annoyance.

Turning off the stereo

A lot of professional reviewers have slammed the infotainment unit. I'm going to focus on one very small part of it that nonetheless makes me grumble every time I use it: turning the stereo off so that it stays off.

The Mazda has three ways to turn off the stereo. The first is the mute button: press the small round knob in the center console and the sound stops. But if you turn off the engine, the sound is back as soon as you start up again. The second is to hold down that same knob, which turns off the infotainment system entirely. That stays off the next time you get in the car, but you also lose the navigation system and can't even see the current time.

If you actually want to turn off just the stereo, you must press the large button to bring up a menu, select “Audio Sources” from that menu, scroll to the very end of the list to find the “Audio Off” item, and finally press the large button again to select it.

As I see it, there are two easy solutions: either remember the setting of the mute, or move “Audio Off” to the very top of the audio menu.

“Upcoming exits” navigation sub-panel

One of the reasons that I bought the CX-70 was that it had a built-in navigation system. And while “Miss Google” is easier to use and gives better directions, I've been in many places without Internet connectivity (and where I might not have already downloaded maps). I also like the navigation screen as a default display.

But then I get on a highway, and the system shows a list of upcoming exits with their distance and estimated ETA, blocking the right third of the screen. In principle, I might like such a list, if it showed the exit names and I could scroll through them to highlight the exit I wanted. But it doesn't, and I can't, and more importantly, I can't make it go away. The UI indicates that I should be able to do that: there's a downward pointing arrow at the top of the widget, which I would interpret as “minimize this window,” but I have not found any way to activate that control (if control it is).

If you go into the navigation menus, there's a sub-menu for what appears on the sidebar. I've told it to show nothing, but that doesn't disable the exit list. Clearly the team that put that checkbox in place never talked to the team that implemented the exit sidebar.

Repeated warning messages

When the CX-70 wants to tell you something, it blanks out most of the instrument console to do so, and requires you to press the "Info" key on the steering wheel to make the message go away.

Sometimes, this is annoying, as when it tells me that my windshield washer fluid is low (especially annoying because that message appears when the fluid is half empty, due to the design of the reservoir). And sometimes it's downright dangerous, such as when it told me — in separate messages — that my front collision avoidance system was no longer operational because I was driving in a torrential downpour. Seeing your gauges replaced by a big warning box is guaranteed to attract your attention, even if you have none to spare.

What raises this behavior to a major annoyance is that the conditions that trigger a message can reset: in the case of the torrential downpour, once I got out of a rain band the sensors would detect that they could see what was in front of me, and the collision avoidance system would be re-enabled. Only to be disabled again, with two more messages, when I entered the next rain band.

Aside from the danger and annoyance, repeated messages desensitize users. After passing through two or three rain bands that day, I forced myself to ignore the messages flashing on my screen. Which, of course, meant that I wouldn't see a truly important message, such as dangerously low tire or oil pressure.

My recommendation here is simple: only show a message once per drive. Assume that I know there's a problem until I turn off the engine. Unfortunately, I think this is unlikely to happen, for the same reason that the CX-70's Owner's Manual prefaces every feature description with a warning that in effect says “don't rely on this feature.”

Rear auto-braking due to bicycle rack

A friend of mine has a Jeep Wrangler. When he shifts into reverse with a hitch-mounted bicycle rack, the car starts beeping. The CX-70 goes one step further: it slams on the brakes. I first discovered this “feature” on our first long trip, trying to turn around in a narrow dead-end street. Even if you're moving at a slow walking speed, suddenly slamming on the brakes will bounce your head off the seatback (try it!). When that happens every few feet, it's infuriating.

There is a way to turn this feature off, buried in the menu system. But it is a useful feature, so I don't want to turn it off permanently. This would be a perfect place for the warning system I described in the last section: apply the brakes (once), display a message, and then ignore the situation once the driver acknowledges the message (this is one case where the acknowledgment should not last for the entire drive).

I ended up solving the problem with a hack. We have the factory-installed trailer hitch, which disables the rear warning systems when you plug in a trailer's umbilical. So I bought a set of magnetic towing lights (used when you're towing another car or a utility trailer that doesn't have lights), and plug them in whenever we have the bicycle rack installed.

Driver personalization system

Everything up to this point has been an annoyance that, on balance, I can live with. This “feature” is the reason that I don't think I'll buy another Mazda.

Like many upscale cars, the CX-70 can save your preferences for seat, steering wheel, and mirror position. There are two buttons on the dash: one for me, the other for my wife. And if that were all the CX-70 provided, I would be happy. But Mazda decided to add the “Driver Personalization System,” which uses facial recognition to figure out who is driving the car.

It's cool technology, right? What could go wrong? The answer: a lot.

First off, it takes a subjectively long time to figure out who the driver is in the best of cases. Objectively, I've timed it as over 10 seconds in the worst case. It doesn't seem to work at all if you have the camera display turned on. And if you shift into gear without waiting for it to finish, it gives up.

Second, it's not very good: at least half the time it doesn't figure out that it's me, even though I've “trained” it with multiple postures, with and without sunglasses. To get a good reading, you must stare at the infotainment screen and remain rigidly in place, like you're posing for an 1800s photo. Even then, the failure rate is pretty high.

Third, it appears to operate asynchronously. Not a problem, except when the “don't look at the infotainment unit while driving” warning pops up (something that happens on every drive). If you acknowledge that warning, then the DPS seems to give up (or maybe it interprets the button press as accepting whatever driver it thinks you are).

None of this would be intolerable, if it simply defaulted to the previous driver's settings. But for some reason, Mazda decided to default to a “guest” setting. The point of this completely eludes me: if you're loaning your car to someone, why would they need their own configuration? And why would Mazda think that the next person to borrow the car would want the same settings?

And that brings me to the real problem: the Driver Personalization System doesn't just adjust the seat, mirrors, and steering wheel. It appears to remember every menu configuration item, from the position of the heads-up display (good) to the selected radio station and audio volume (not so good, although thankfully we don't have a teenager). Over the past year I've tried to make the guest settings the same as my normal settings, and every time I find a new one to change I curse the Mazda designers and developers.

So I have one recommendation: get rid of the friggin' “guest” mode and just default to the last driver.

Wrapping Up

I wrote this post at least partly to vent, and partly in the hope that a Mazda product manager might see it and instigate change (hey, it's worked for my posts critical of AWS services!).

But a bigger reason — and why it's on my programming-focused blog — is that each of these problems is a user interface failure, likely caused by design and development teams working in isolation, trying to tick features off a product manager's list and meet an imposed deadline, without a QA team acting as the voice of the customer. Fortunately, these are all software problems: they can be corrected by that same development team and installed during a scheduled service.

If enough people show their annoyance.

Saturday, December 14, 2024

Once more into the breach: Amazon EFS performance for software builds

This is the third time that I'm writing about this topic. The first time was in 2018, the second in 2021. In the interim, AWS has announced a steady stream of improvements, most recently (October) increasing read throughput to 60 MB/sec.

I wasn't planning to revisit this topic. However, I read Tim Bray's post on the Bonnie disk benchmark, and it had the comment “it’d be fun to run Bonnie on a sample of EC2 instance types with files on various EBS and EFS and so on configurations.” And after a few exchanges with him, I learned that the Bonnie++ benchmark measured file creation and deletion in addition to IO speed. So here I am.

EFS for Builds

Here's the test environment (my previous posts provide more information):

  • All tests run on an m5d.xlarge instance (4 vCPU, 16 GB RAM), running Amazon Linux 2023 (AMI ami-0453ec754f44f9a4a).
  • I created three users: one using the attached instance store, one using EBS (separate from the root filesystem), and one using EFS. Each user's home directory was on the filesystem in question, so all build-specific IO should be confined to that filesystem type, but they shared the root filesystem for executables and /tmp.
  • The local and EBS filesystems were formatted as ext4.
  • The EBS filesystem used a GP3 volume (so a baseline 3000 IOPS).
  • The EFS filesystem used Console defaults: general purpose, elastic throughput. I mounted it using the AWS recommended settings.
  • As a small project, my AWS appenders library, current (3.2.1) release.
  • As a large project, the AWS Java SDK (v1), tag 1.11.394 (the same that I used for previous posts).
  • The build command: mvn clean compile.
  • For each project/user, I did a pre-build to ensure that the local Maven repository was populated with all necessary dependencies.
  • Between builds I flushed and cleared the filesystem cache; see previous posts for details (a sketch of the commands appears after this list).
  • I used the time command to get timings; all are formatted minutes:seconds, rounded to the nearest second. “Real” time is the elapsed time of the build; if you're waiting for a build to complete, it's the most important number for you. “User” time is CPU time aggregated across threads; it should be independent of disk technology. And “System” time is that spent in the kernel; I consider it a proxy for how complex the IO implementation is (given that the absolute number of requests should be consistent between filesystems).
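
For reference, here's a rough sketch of one round of the EFS setup-and-build cycle described above. The NFS mount options are the ones AWS recommends; the filesystem ID, region, and project directory are placeholders, and this is my reconstruction of the process rather than the exact script I ran.

    # mount EFS with the AWS-recommended NFS options (filesystem ID is a placeholder)
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
        fs-XXXXXXXX.efs.us-east-1.amazonaws.com:/ /mnt/efs

    # flush dirty pages and drop the page/dentry/inode caches between builds
    sync
    echo 3 | sudo tee /proc/sys/vm/drop_caches

    # run the timed build from a home directory on the filesystem under test
    cd ~/aws-sdk-java && time mvn clean compile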

And here are the results:

                   Appenders                  AWS SDK
                   Real    User    System     Real    User    System
  Instance Store   00:06   00:16   00:01      01:19   02:12   00:09
  EBS              00:07   00:16   00:01      01:45   02:19   00:09
  EFS              00:18   00:20   00:01      15:59   02:24   00:17

These numbers are almost identical to the numbers from three years ago. EFS has not improved its performance when it comes to software build tasks.

What does Bonnie say?

As I mentioned above, one of the things that prompted me to revisit the topic was learning about Bonnie, specifically, Bonnie++, which performs file-level tests. I want to be clear that I'm not a disk benchmarking expert. If you are, and I've made a mistake in interpreting these results, please let me know.

I spun up a new EC2 instance to run these tests. Bonnie++ is distributed as a source tarball; you have to compile it yourself. Unfortunately, I was getting compiler errors (or maybe warnings) when building on Amazon Linux. Since I no longer have enough C++ knowledge to debug such things, I switched to Ubuntu 24.04 (ami-0e2c8caa4b6378d8c), which has Bonnie++ as a supported package. I kept the same instance type (m5d.xlarge).
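
For anyone reproducing the tests, getting Bonnie++ onto Ubuntu is a one-line package install (no compiling required):

    # install Bonnie++ from the Ubuntu package repositories
    sudo apt-get update && sudo apt-get install -y bonnie++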

I ran with the following parameters:

  • -c 1, which uses a single thread. I also ran with -c 4 and -c 16 but the numbers were not significantly different.
  • -s 32768, to use 32 GB for the IO tests. This is twice the size of the VM's RAM, so the test should measure actual filesystem performance rather than the benefit of the buffer cache.
  • -n 16, to create/read/delete 16,384 small files in the second phase (the -n value is in multiples of 1024).

Here are the results, with the command-lines that invoked them:

  • Local Instance Store: time bonnie++ -d /mnt/local/ -c 1 -s 32768 -n 16
    Version 2.00a       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    ip-172-30-1-84  32G  867k  99  128m  13  126m  11 1367k  99  238m  13  4303 121
    Latency              9330us   16707us   38347us    6074us    1302us     935us
    Version 2.00a       ------Sequential Create------ --------Random Create--------
    ip-172-30-1-84      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++     0  99 +++++ +++ +++++ +++
    Latency               146us     298us     998us    1857us      18us     811us
    1.98,2.00a,ip-172-30-1-84,1,1733699509,32G,,8192,5,867,99,130642,13,128610,11,1367,99,244132,13,4303,121,16,,,,,+++++,+++,+++++,+++,+++++,+++,4416,99,+++++,+++,+++++,+++,9330us,16707us,38347us,6074us,1302us,935us,146us,298us,998us,1857us,18us,811us
    
    real	11m10.129s
    user	0m11.579s
    sys	1m24.294s
         
  • EBS: time bonnie++ -d /mnt/ebs/ -c 1 -s 32768 -n 16
    Version 2.00a       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    ip-172-30-1-84  32G 1131k  99  125m   8 65.4m   5 1387k  99  138m   7  3111  91
    Latency              7118us   62128us   80278us   12380us   16517us    6303us
    Version 2.00a       ------Sequential Create------ --------Random Create--------
    ip-172-30-1-84      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    Latency               218us     303us     743us      69us      15us    1047us
    1.98,2.00a,ip-172-30-1-84,1,1733695252,32G,,8192,5,1131,99,128096,8,66973,5,1387,99,140828,7,3111,91,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,7118us,62128us,80278us,12380us,16517us,6303us,218us,303us,743us,69us,15us,1047us
    
    real	16m52.893s
    user	0m12.507s
    sys	1m4.045s
         
  • EFS: time bonnie++ -d /mnt/efs/ -c 1 -s 32768 -n 16
    Version 2.00a       ------Sequential Output------ --Sequential Input- --Random-
                        -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Name:Size etc        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
    ip-172-30-1-84  32G  928k  98  397m  27 60.6m   6  730k  99 63.9m   4  1578  16
    Latency              8633us   14621us   50626us    1893ms   59327us   34059us
    Version 2.00a       ------Sequential Create------ --------Random Create--------
    ip-172-30-1-84      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16     0   0 +++++ +++     0   0     0   0     0   1     0   0
    Latency             22516us      18us     367ms   24473us    6247us    1992ms
    1.98,2.00a,ip-172-30-1-84,1,1733688528,32G,,8192,5,928,98,406639,27,62097,6,730,99,65441,4,1578,16,16,,,,,218,0,+++++,+++,285,0,217,0,944,1,280,0,8633us,14621us,50626us,1893ms,59327us,34059us,22516us,18us,367ms,24473us,6247us,1992ms
    
    real	23m56.715s
    user	0m11.690s
    sys	1m18.469s
         

For the first part, block IO against a large file, I'm going to focus on the “Rewrite” statistic: the program reads a block from the already-created file, makes a change, and writes it back out. For this test, local instance store managed 126 MB/sec, EBS was 65.4 MB/sec, and EFS was 60.6 MB/sec. Nothing surprising there: EFS achieved its recently-announced throughput, and a locally-attached SSD was faster than EBS (although much slower than the 443 MB/sec from my five-year-old laptop, a reminder that EC2 provides fractional access to physical hardware).

The second section was what I was interested in, and unfortunately, the results don't give much insight. In some doc I read that "+++++" in the output signifies that the results aren't statistically relevant (can't find that link now). Perhaps that's due to Bonnie++ dating to the days of single mechanical disks, and modern storage systems are all too fast?

But one number that jumped out at me was “Latency” for file creates: 146us for instance store, 218us for EBS, but a whopping 22516us for EFS. I couldn't find documentation for this value anywhere; reading the code, it appears to measure the longest time for a single operation. That could mean that EFS completes 99% of requests nearly as quickly as the other filesystems, with a few extreme outliers, or it could mean that the numbers are generally high, and the one stated here is merely the worst. I suspect it's the latter.

I think, however, that the output from the Linux time command tells the story: each of the runs uses 11-12 seconds of “user” time, and a minute plus of “system” time. But they vary from 11 minutes of “real” time for instance store, up to nearly 24 minutes for EFS. That says to me that EFS has much poorer performance, and since the block IO numbers are consistent, it must be accounted for by the file operations (timestamps on the operation logs would make this a certainty).

Conclusion

So should you avoid EFS for your build systems? Mu.

When I first looked into EFS performance, in 2018, I was driven by my experience setting up a build server. But I haven't done that since then, and can't imagine that too many other people have either. Instead, the development teams that I work with typically use “Build as a Service” tools such as GitHub Actions (or, in some cases, Amazon CodeBuild). Running a self-hosted build server is, in my opinion, a waste of time and money for all but the most esoteric needs.

So where does that leave EFS?

I think that EFS is valuable for sharing files — especially large files — when you want or need filesystem semantics rather than the web-service semantics of S3. To put this into concrete terms: you can read a section of an object from S3, but it's much easier codewise to lseek or mmap a file (to be fair, I haven't looked at how well Mountpoint for Amazon S3 handles those operations). And if you need the ability to modify portions of a file, then EFS is the only real choice: to do that with S3 you'd have to rewrite the entire file.
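
To make the read case concrete, here's a sketch in shell, with a hypothetical bucket, key, and EFS path: each command reads a 4 KiB slice starting at a 1 GiB offset. Both work, but the S3 version is a web-service call with an explicit byte range, while the EFS version is ordinary filesystem IO that any existing tool, syscall, or library can perform.

    # S3: ranged GET via the web-service API (bucket and key are placeholders)
    aws s3api get-object --bucket my-bucket --key data/big-file.bin \
        --range bytes=1073741824-1073745919 slice.bin

    # EFS: plain filesystem semantics; seek and read with any tool
    dd if=/mnt/efs/data/big-file.bin of=slice.bin bs=4096 skip=262144 count=1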

For myself, I haven't found that many use cases where EFS is the clear winner over alternatives. And given that, and the fact that I don't plan to set up another self-hosted build server, this is the last posting that I plan to make on the topic.

Friday, September 6, 2024

My Thoughts on CodeCommit Deprecation

AWS's recent fait accompli deprecation of CodeCommit and six other services was a shock to me.*

Not because CodeCommit was a particularly good product — it wasn't — but because it could have been one with some investment. While there are many Git repository services in the world, CodeCommit was part of the AWS ecosystem, giving it some unique capabilities. Capabilities that AWS never exploited.

CodeCommit was released at re:Invent 2014, and became generally available the following summer. At some point after that, I moved over a couple of repositories that I'd been self-hosting. At the time, GitHub didn't offer free private repositories, and I was happy to pay AWS a few cents a month to ensure that they were available anywhere and safely backed up.

The first thing I noticed about CodeCommit was that it was painfully slow. To put some numbers on that: an HTTPS clone of my appenders library (~10,000 objects, 2.6 MiB) from GitHub takes 0.917 seconds. From the identical repository on CodeCommit, the time is 7.407 seconds. My website, which has fewer objects but more bytes, takes 22 seconds. While you might not be frequently cloning repositories, pulls and pushes are also much slower than with GitHub.
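
(For anyone who wants to compare their own numbers, the measurement is nothing fancier than a timed clone; the repository URLs below are placeholders.)

    # time an HTTPS clone from each service (URLs are placeholders)
    time git clone https://github.com/example/my-repo.git /tmp/from-github
    time git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo /tmp/from-codecommit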

I was also annoyed that I had to set up an association between my private key and a CodeCommit user: I have several machines, each with its own private key. Making this more annoying, the key associations don't provide any indication of which key they represent; it would have been nice to see the associated public key, or even a key fingerprint.

But as I learned later, I didn't need to make those associations. CodeCommit has a git credential helper that generates temporary credentials based on IAM identity. It's not terribly well documented, and an experienced Git user probably wouldn't look at the documentation anyway (at least, I didn't), especially since the “helpful hints” shown after creating a repository make no mention of it.
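
In case you haven't seen it, the configuration is two git config commands that route CodeCommit HTTPS requests through the AWS CLI, which signs them with whatever IAM identity is in effect. This is the AWS-documented setup, shown here for reference:

    # use the AWS CLI to generate temporary credentials for CodeCommit HTTPS remotes
    git config --global credential.helper '!aws codecommit credential-helper $@'
    git config --global credential.UseHttpPath true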

Highlighting this one feature would have made CodeCommit a far more useful tool than it was. For example, a CI/CD pipeline running on AWS infrastructure wouldn't need to store credentials to access the source repository. But there's no suggestion that users could do that, even in AWS's own CodeBuild user guide (which gives examples for GitHub, BitBucket, and GitLab).

Or, as an example closer to my own needs: five years ago I submitted an issue to the “CloudFormation roadmap”, asking for the ability to retrieve templates from arbitrary HTTPS URLs (and, based on the stars and comments, this seems to be a common wish). While I can understand reluctance to allow generic public repository URLs as a security risk (somebody, somewhere, would check in something they shouldn't), CodeCommit and role-based credentials would allow CloudFormation to easily support private templates. But with CodeCommit's deprecation, that's a lost cause.

Historically, AWS has made a big deal of “dog-fooding” their services; I don't think this happened with CodeCommit. If you want to download SDK source, or any other AWS-provided open source, you go to GitHub. Although, given how slow CodeCommit is, maybe we should be thankful; something like the Java SDK already takes an enormously long time to download. But without AWS using CodeCommit internally, there was no pressure to improve. And no serendipitous “hey, we can do X!” moments.

So what? Anybody who signed up for CodeCommit can still use it, right? And perhaps it was just hubris on AWS's part to compete with companies that already produced developer tooling, and the “Code” suite should never have been released in the first place.

But for me, it's yet another sign that AWS has moved away from “now go build.” And that was what attracted me to AWS in the first place.


* “Fait accompli” because they did not announce their plans in advance, instead using blog posts like this one, which was released after they'd already blocked new users.