Four Files in a Folder
If you read Part 1, you already know how I got here. After three rounds of revisions from Steven’s engineer at BT Moto, Hannah Savanna’s tune was sorted, the cold idle was holding, and I had a folder on my laptop that looked like this:
```
$ ls -alth
total 3664
-rw-r--r-- 1 richard users 935728 May 8 12:57 tune3.fpf
-rw-r--r-- 1 richard users 935728 May 6 17:47 tune2.fpf
-rw-r--r-- 1 richard users 935792 Apr 28 09:36 tune1.fpf
-rw-r--r-- 1 richard users 940720 Apr 27 15:31 stock.fpf
```
stock.fpf was what came off the bike before any tuning. tune1.fpf was the first attempt — fixed the throttle, introduced RPM hunting. tune2.fpf was a partial fix for the cold idle stall. tune3.fpf got it right.
Four binary files. Four versions of Hannah Savanna’s brain at four points in her tuning history. The software engineer in me wanted to know what changed between them — which bytes corresponded to “open the throttle blade to 100%” versus “raise the cold idle target by 200 RPM,” which regions of the file map to the calibration tables a tuner actually edits.
The security engineer in me ran xxd on stock.fpf first.
It was encrypted.
First Look
The Unix tool xxd takes a binary file and prints it as a hex dump — each line shows a 16-byte chunk as two-digit hex pairs on the left and the ASCII interpretation of those same bytes on the right. It’s the standard way to eyeball what’s actually inside a file when you don’t know its format.
Running xxd tune3.fpf and looking at the start of the file produces a wall of two-digit hex pairs with no obvious structure. No magic bytes at offset zero — no 89 50 4E 47 for PNG, no 7F 45 4C 46 for ELF, no vendor signature, nothing identifiable. Just byte-noise.
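You can make the "no recognizable signature" claim mechanical rather than eyeballed. Here's a minimal check in Python; the signature list is just a handful of common formats I picked for illustration, nothing exhaustive or Dimsport-specific:

```python
# A few well-known magic signatures, purely illustrative.
KNOWN_MAGIC = {
    bytes.fromhex("89504e470d0a1a0a"): "PNG",
    bytes.fromhex("7f454c46"): "ELF",
    bytes.fromhex("1f8b"): "gzip",
    bytes.fromhex("504b0304"): "ZIP",
    bytes.fromhex("425a68"): "bzip2",
}

with open("tune3.fpf", "rb") as f:
    head = f.read(16)

hits = [name for magic, name in KNOWN_MAGIC.items() if head.startswith(magic)]
print(f"first 16 bytes: {head.hex(' ')}")
print(f"recognized signatures: {hits or 'none'}")
```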
That noise is the signature of one of two things: strong encryption, or strong compression. Both produce output that looks uniform: compression because it squeezes the redundancy out of the input, encryption because it scrambles whatever structure was there. From the first screen of a hex dump alone, you can’t tell which.
But there are other ways to tell. And once you can tell them apart, you can start asking sharper questions. What’s the cipher doing? How is the key managed? And what — if anything — can someone holding only the ciphertext actually learn?
That’s what the rest of this post is about.
The Structure
The first thing worth doing with an unknown binary is checking whether anything jumps out structurally. So I dumped each of the four files into Python and started looking for matching 16-byte blocks across them — the working theory being that if there were a fixed format header, it would show up as identical bytes at the same offset in every file.
```python
from collections import Counter

files = ["stock.fpf", "tune1.fpf", "tune2.fpf", "tune3.fpf"]
data = {f: open(f, "rb").read() for f in files}

block_size = 16
min_len = min(len(d) for d in data.values())

matching = []
for i in range(0, min_len - block_size, block_size):
    blocks = [d[i:i+block_size] for d in data.values()]
    if all(b == blocks[0] for b in blocks):
        matching.append(i)

print(f"Blocks matching across ALL 4 files: {len(matching)}")
print(f"First 20 offsets: {[hex(o) for o in matching[:20]]}")
print(f"Last 5 offsets: {[hex(o) for o in matching[-5:]]}")
print()
intervals = Counter(matching[i+1] - matching[i] for i in range(len(matching)-1))
print("Interval distribution between matching blocks:")
for interval, count in sorted(intervals.items()):
    print(f" {interval} ({hex(interval)}): {count} occurrences")
```
```
$ python3 find_matching_blocks.py
Blocks matching across ALL 4 files: 120
First 20 offsets: ['0x80', '0x90', '0xa0', '0xb0', '0xc0', '0xd0', '0xe0', '0xf0', '0x100', '0x110', '0x120', '0x130', '0x140', '0x150', '0x160', '0x170', '0x180', '0x190', '0x1a0', '0x1b0']
Last 5 offsets: ['0x7b0', '0x7c0', '0x7d0', '0x7e0', '0x7f0']

Interval distribution between matching blocks:
 16 (0x10): 119 occurrences
```
It did. Sort of.
A contiguous 1,920-byte region — exactly 120 blocks of 16 bytes — identical across stock, tune1, tune2, and tune3, starting at offset 0x80 and running through 0x7FF. That’s not coincidence. That’s the FPF file format announcing itself: a structural region the encryption doesn’t touch.
I got excited!!
Then I actually printed the bytes.
```
$ xxd -s 0x80 -l 1920 tune3.fpf
00000080: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
00000090: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000000a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000000b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000000c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
... (continues identically for all 1,920 bytes) ...
000007e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
000007f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
```
Zeros. Nineteen hundred and twenty bytes of zeros. The “fixed header region” was reserved space — padding the format leaves empty, possibly for forward-compatibility, possibly some artifact of how the original struct got laid out. Either way, it’s not telling me anything cryptographic. The 120 matching blocks weren’t a structural signature; they were the absence of structure.
So I went back and looked at what was actually different.
```
$ stat -c "%n: %s bytes" *.fpf
stock.fpf: 940720 bytes
tune1.fpf: 935792 bytes
tune2.fpf: 935728 bytes
tune3.fpf: 935728 bytes
```
Subtract the 0x800-byte header-plus-pad region from each and you get the actual encrypted payload sizes:
| File | Total (bytes) | Payload (total − 2,048 bytes) |
|---|---|---|
| stock.fpf | 940,720 | 938,672 |
| tune1.fpf | 935,792 | 933,744 |
| tune2.fpf | 935,728 | 933,680 |
| tune3.fpf | 935,728 | 933,680 |
That’s the structure. Three regions, every file:
- 0x00–0x7F — 128 bytes that are unique to each file. This is where the real header lives: whatever IV, key material, and metadata the format carries to tell the MyGenius how to decrypt the rest.
- 0x80–0x7FF — 1,920 bytes of all-zero padding. Reserved space.
- 0x800–end — the encrypted payload, the rest of the file.
That third region is what we actually want to understand. The first thing it offers up is its size.
Notice that tune2.fpf and tune3.fpf have exactly the same payload size — 933,680 bytes — while tune1.fpf is 64 bytes larger and stock.fpf is a completely different size again. That’s a clue worth setting aside for later. We’ll come back to it.
For now: a three-region file layout, a small unique header, and roughly 933,000 bytes of payload that look like noise. Looking like noise isn’t the same as being noise, though — JPEGs look like noise to a hex editor and they’re not encrypted. Let’s make “looks like noise” rigorous.
The Entropy Test
“Looks like noise” is impressionistic. The rigorous version is Shannon entropy, which measures, in bits per byte, how much information a sequence of bytes actually carries. The math is short:
```python
import math
from collections import Counter

def shannon_entropy(data):
    counter = Counter(data)
    length = len(data)
    return -sum((c/length) * math.log2(c/length) for c in counter.values())
```
For an 8-bit value (one byte), entropy maxes out at 8.0 bits/byte — every byte value from 0x00 to 0xFF appearing with equal probability. English text averages around 4.5. A JPEG hits about 7.5. Anything pushing 8.0 is byte-uniform: every possible value showing up roughly equally often, with no statistical preference. That’s the fingerprint of either a strong compressor or a strong cipher.
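To calibrate intuition, here’s the same function pointed at a few known inputs. The repeated English sentence is a rough stand-in for prose, so treat its number as ballpark; the ordering is what matters:

```python
import os

# Reuses shannon_entropy() from above.
samples = {
    "all zeros": bytes(100_000),
    "english-ish text": b"the quick brown fox jumps over the lazy dog " * 2000,
    "os.urandom": os.urandom(100_000),
}
for name, blob in samples.items():
    print(f"{name:>18}: {shannon_entropy(blob):.3f} bits/byte")
```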
Running the function across each file’s payload (the region from offset 0x800 to end, excluding the structural header and zero pad):
```python
for fn in ["stock.fpf", "tune1.fpf", "tune2.fpf", "tune3.fpf"]:
    with open(fn, "rb") as f:
        payload = f.read()[0x800:]
    print(f"{fn} payload: {shannon_entropy(payload):.5f} bits/byte")
```
```
$ python3 entropy.py
stock.fpf payload: 7.99982 bits/byte
tune1.fpf payload: 7.99979 bits/byte
tune2.fpf payload: 7.99981 bits/byte
tune3.fpf payload: 7.99983 bits/byte
```
That’s flat-up-against-the-ceiling. Across roughly 933,000 bytes per file, you cannot distinguish these from a random number generator on a byte-frequency basis alone.
I almost stopped here. The result was clean and unambiguous and the next test felt like overkill. But “looks random in aggregate” is a different claim from “looks random everywhere” — a high-entropy file with a low-entropy region embedded in it will still average high overall. To rule that out, I ran the same calculation across 4KB blocks across the payload, looking for any block that dipped:
1with open("tune3.fpf", "rb") as f:
2 payload = f.read()[0x800:]
3
4block_size = 4096
5entropies = [
6 shannon_entropy(payload[i:i+block_size])
7 for i in range(0, len(payload) - block_size, block_size)
8]
9
10print(f"tune3.fpf payload, {len(entropies)} non-overlapping 4KB blocks:")
11print(f" min: {min(entropies):.5f}")
12print(f" max: {max(entropies):.5f}")
13print(f" mean: {sum(entropies)/len(entropies):.5f}")
```
$ python3 entropy_blocks.py
tune3.fpf payload, 227 non-overlapping 4KB blocks:
 min: 7.94180
 max: 7.96606
 mean: 7.95475
```
That looks lower than the full-payload number, but it isn’t a red flag — it’s exactly where uniform random data should land at this sample size. With only 4,096 samples drawn across 256 possible byte values, you get an average of 16 occurrences per value, and natural statistical fluctuation puts measured entropy slightly below the 8.0 ceiling even for a truly uniform source. The theoretical expected value for a uniform random 4KB sample is around 7.955. Our mean of 7.95475 lands right on it.
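That ~7.955 figure isn’t hand-waving; it’s the standard small-sample bias of the plug-in entropy estimator. For a uniform source over K = 256 byte values sampled N times, the estimate falls short of the true 8.0 by roughly (K − 1) / (2N ln 2) bits, the first-order (Miller–Madow) bias term:

```python
import math

K = 256   # possible byte values
N = 4096  # bytes per block
bias = (K - 1) / (2 * N * math.log(2))
print(f"expected entropy of a uniform 4KB sample: ~{8 - bias:.3f} bits/byte")  # ~7.955
```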
What matters is the spread: 7.942 to 7.966, a band 0.024 bits wide, with no outliers anywhere across all 227 blocks. There’s no region of the payload that looks less random than any other. The other three files behave identically.
So whatever’s in there, it’s compressed, encrypted, or both. The entropy test alone can’t tell us which. For that, we need to compare files against each other.
The Differential Analysis
Entropy tells us each file is random-looking on its own. But there’s a famous failure mode of poorly-designed crypto where each ciphertext looks great in isolation and the leak only shows up when you have two of them side by side. WEP famously fell to exactly this kind of inter-file analysis (but with Wi-Fi packets instead of files). So even with one well-behaved file in front of me, the test that actually told me whether this implementation was competent was the test that compared files against each other.
To answer that, a quick detour into how modern ciphers work. When you encrypt something with AES or any other respectable block cipher, you don’t just feed it a key and the plaintext — you also feed it a small, unique value called an IV (initialization vector) or nonce (“number used once”). The IV’s job is to randomize the encryption so that encrypting the same plaintext twice produces two completely different ciphertexts. Without an IV, AES would be deterministic — encrypt the same message twice with the same key and you’d get the same ciphertext both times. That’s bad, because it leaks information: an attacker watching the ciphertext can tell when the same message gets sent again, even without ever breaking the cipher.
With a fresh IV per encryption, the same plaintext under the same key produces ciphertext that looks completely uncorrelated to anything that came before. That’s the property we’re about to test for.
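Here’s that property as a short demonstration, using AES-CBC from the pyca/cryptography package. The key, plaintext, and mode are all throwaway choices for illustration; I don’t know which mode Dimsport actually uses:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                                   # throwaway 256-bit key
plaintext = b"open the throttle blade to 100% " * 2    # 64 bytes, a multiple of the 16-byte block size

def encrypt(key: bytes, iv: bytes, data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(data) + enc.finalize()

fixed_iv = bytes(16)
print(encrypt(key, fixed_iv, plaintext) == encrypt(key, fixed_iv, plaintext))
# True: same key + same IV is deterministic
print(encrypt(key, os.urandom(16), plaintext) == encrypt(key, os.urandom(16), plaintext))
# False: a fresh IV per encryption makes the two ciphertexts unrelated
```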
If two files were generated by the same deterministic process — same key, same IV — then identical plaintext regions at the same offset would produce identical ciphertext regions at the same offset. They’d match. If the process is non-deterministic — same key but a unique IV per file — then identical plaintext would produce completely different ciphertext, and the two files would look like independent random sequences when compared byte by byte.
Tune2 and Tune3 are the cleanest test case. They have identical payload sizes (933,680 bytes), which means we can compare them at every byte offset without worrying about alignment. They were also produced from very similar source calibrations — Steven’s engineer made targeted edits to Tune2 to produce Tune3, so a meaningful chunk of the underlying plaintext is almost certainly identical between them.
The comparison:
1with open("tune2.fpf", "rb") as f: p2 = f.read()[0x800:]
2with open("tune3.fpf", "rb") as f: p3 = f.read()[0x800:]
3
4assert len(p2) == len(p3)
5n_blocks = len(p2) // 16
6matching_bytes = sum(1 for i in range(len(p2)) if p2[i] == p3[i])
7matching_blocks = sum(1 for i in range(0, n_blocks * 16, 16)
8 if p2[i:i+16] == p3[i:i+16])
9
10print(f"tune2 vs tune3 payload comparison ({len(p2)} bytes, {n_blocks} blocks):")
11print(f" Matching bytes: {matching_bytes} ({100*matching_bytes/len(p2):.4f}%)")
12print(f" Matching 16-byte blocks: {matching_blocks} ({100*matching_blocks/n_blocks:.4f}%)")
13print(f" Random expected byte rate: {100/256:.4f}%")
```
$ python3 differential.py
tune2 vs tune3 payload comparison (933680 bytes, 58355 blocks):
 Matching bytes: 3514 (0.3764%)
 Matching 16-byte blocks: 0 (0.0000%)
 Random expected byte rate: 0.3906%
```
That byte match rate of 0.3764% is below the random-expected rate of 0.3906%. Not by much — well within sample variance — but it’s a useful reality check. If there were any deterministic structure linking the two ciphertexts, we’d see the byte match rate climb above random. It doesn’t. And the 16-byte block match rate is exactly zero. Across 58,355 sixteen-byte blocks, not a single one matches at the same offset between Tune2 and Tune3.
To rule out the possibility that Tune2/Tune3 are a fluke, I ran the same comparison across every pair of files:
```python
from itertools import combinations

files = ["stock.fpf", "tune1.fpf", "tune2.fpf", "tune3.fpf"]
payloads = {f: open(f, "rb").read()[0x800:] for f in files}

for a, b in combinations(files, 2):
    pa, pb = payloads[a], payloads[b]
    n = min(len(pa), len(pb))
    nb = n // 16
    mb = sum(1 for i in range(n) if pa[i] == pb[i])
    mblk = sum(1 for i in range(0, nb * 16, 16) if pa[i:i+16] == pb[i:i+16])
    print(f"{a} vs {b}: {mb} byte matches ({100*mb/n:.4f}%), {mblk} block matches ({100*mblk/nb:.4f}%)")
```
```
$ python3 pairwise.py
stock.fpf vs tune1.fpf: 3682 byte matches (0.3943%), 0 block matches (0.0000%)
stock.fpf vs tune2.fpf: 3611 byte matches (0.3867%), 0 block matches (0.0000%)
stock.fpf vs tune3.fpf: 3632 byte matches (0.3890%), 0 block matches (0.0000%)
tune1.fpf vs tune2.fpf: 3694 byte matches (0.3956%), 0 block matches (0.0000%)
tune1.fpf vs tune3.fpf: 3727 byte matches (0.3992%), 0 block matches (0.0000%)
tune2.fpf vs tune3.fpf: 3514 byte matches (0.3764%), 0 block matches (0.0000%)
```
Six pairs. Byte match rates clustered tightly between 0.3764% and 0.3992% — all within noise of the 0.3906% you’d expect from independent random data. Block match rates: zero, every time, across roughly 350,000 total blocks examined.
Two conclusions fall out of this.
First, the encryption uses a per-file IV. If the same key/IV pair were being reused, we’d see structure leak through wherever the underlying plaintext was identical — and given how similar the Tune2 and Tune3 source calibrations have to be, that structure would be measurable. It isn’t. The IV is unique per file.
Second, the payload is encrypted, not just compressed. Compression is deterministic — gzipping the same input twice gives you the same output. If these were compressed-only files, identical source regions would map to identical compressed regions, and the block match rate would be non-zero somewhere. It isn’t. Whatever’s happening in the payload, it’s keyed.
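The determinism half of that argument is easy to check for yourself with zlib; the input here is an arbitrary stand-in, nothing from an FPF:

```python
import zlib

blob = b"cold idle target: 1400 RPM\n" * 1000
print(zlib.compress(blob) == zlib.compress(blob))   # True: same input, same settings, same output
print(zlib.compress(blob)[:2].hex(" "))             # "78 9c": zlib streams even announce themselves with a header
```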
As for which cipher: AES-256 has been the audited standard for symmetric encryption for two decades, has hardware acceleration on essentially every modern microcontroller, and is the obvious choice for an Italian company selling into the European automotive market where rolling your own cipher would be malpractice. The mode is almost certainly an authenticated mode like GCM, because anything writing to an engine control unit needs integrity protection as well as confidentiality — but the specifics aren’t observable from the ciphertext, and they don’t change the rest of the analysis. What is observable is that the encryption is non-deterministic per-file, which tells us the implementation is using a fresh IV for every output. That’s the right way to use a cipher, and not every commercial product gets it right.
Now the interesting question becomes: keyed how? What kind of key management is sitting behind the 128-byte unique header?
The Key Hierarchy
Everything in this section is hypothesis, not observation — it’s the simplest architecture consistent with what I can see in the files and on the device, but I can’t verify it without seeing Dimsport’s source code. With that caveat: the 128-byte unique-per-file header almost certainly contains a wrapped key, and once you assume that, the architecture explains a lot.
Quick primer. Key wrapping is the practice of encrypting a key with another key. You have a long-lived, high-value “master” key, and you generate short-lived, single-purpose “file” keys whenever you need to encrypt something. Each file key is wrapped — encrypted — with the master key, and then shipped alongside the file it encrypts. The recipient uses the master key (which they already have, securely stored) to unwrap the file key, and then uses the file key to decrypt the file’s actual contents.
This pattern is everywhere in modern cryptography because it has a lot of nice properties. The master key rarely sees plaintext data, which limits its exposure. File keys can be revoked or rotated without touching the master. And — most relevantly here — the master key can be unique per device, which means a file wrapped for one device cannot be unwrapped by any other device.
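The pattern in miniature, using RFC 3394 AES key wrap from pyca/cryptography. Every value here is invented for illustration; I have no visibility into Dimsport’s actual wrapping scheme or key sizes:

```python
import os
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

device_master_key = os.urandom(32)   # hypothetical: lives only on the handheld
file_key = os.urandom(32)            # hypothetical: generated fresh per tune file

wrapped = aes_key_wrap(device_master_key, file_key)      # this is what would ship in a file header
unwrapped = aes_key_unwrap(device_master_key, wrapped)   # what the recipient does before decrypting

assert unwrapped == file_key
print(f"wrapped key blob: {len(wrapped)} bytes")          # 40 bytes for a 32-byte key under RFC 3394
```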
That’s almost certainly what’s happening in the FPF format. The structure I’d expect, working from the outside in:
- A Dimsport root key, baked into MyGenius firmware at the factory. Never leaves the device.
- A per-device master key, derived from the root key plus device-specific identifiers (serial number, master/client codes, possibly fuse-burned values on the MCU).
- A per-file AES-256 key, generated freshly for every tune and wrapped with the per-device master key. This is what lives in the 128-byte FPF header.
The identifiers driving the per-device derivation are visible from the MyGenius’s own info screens. Mine read:
| Field | Value |
|---|---|
| Device serial | 2507XXXX (redacted) |
| Master Code | C1434 (BT Moto) |
| Client Code | M1434 |
The Master Code uniquely identifies BT Moto as the upstream tuner who licensed this handheld. The Client Code identifies the device itself within BT Moto’s fleet. The serial is what it sounds like. Some combination of those three values, mixed with the firmware-resident root key, almost certainly produces the per-device master that wraps every file key Dimsport’s servers generate for me.
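Sketched as code, that mixing might look something like the HKDF construction below. The structure, the labels, and the idea of concatenating these particular identifiers are pure guesswork on my part; the only real values are the ones from the info screen:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

dimsport_root_key = os.urandom(32)   # stand-in for the firmware-resident root secret

def derive_device_master(root_key: bytes, serial: str, master_code: str, client_code: str) -> bytes:
    # Hypothetical: mix the root key with the device identifiers to get a per-device master key.
    info = f"{serial}|{master_code}|{client_code}".encode()
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=info).derive(root_key)

master = derive_device_master(dimsport_root_key, "2507XXXX", "C1434", "M1434")
print(master.hex())   # change any identifier and the derived key is completely different
```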
The architectural payoff is significant. A tune file generated for my device is bound to my device cryptographically. If I emailed the .FPF to another Africa Twin owner with their own MyGenius handheld, their device wouldn’t decrypt it — their per-device master key is different, so the wrapped file key in the header would unwrap to garbage. The MyGenius can’t be tricked into flashing a file it doesn’t have the right key for, because the key derivation itself fails first.
And that, finally, explains the plot point from Part 1.
When I clicked the wrong button on day one and locked myself out of the handheld with “Device locked. Waiting for the brand change authorization from the master,” I assumed it was a software EULA — some licensing flag flipped in a registry somewhere that Dimsport’s customer support could un-flip on request. It isn’t. The “brand change authorization” is the act of cryptographically rebinding the device’s master key from one tuner’s Master Code to another’s. That can only happen with Dimsport’s involvement because the new master key has to be generated by Dimsport’s servers using Dimsport’s root secret. BT Moto’s “Master” status in the architecture isn’t bureaucratic; it’s a key derivation input.
Which is to say: when Steven approved my reset the next morning, what actually happened wasn’t a database entry getting updated. Somewhere in Italy (or wherever the cloud lives), Dimsport’s server generated a new per-device master key for serial 2507XXXX, signed an authorization token tying it to BT Moto’s Master Code, and pushed both down to my handheld. The handheld accepted the new key, the lockout state cleared, and I went on with my life.
Part 1’s plot point makes more sense now. The brand change lock isn’t a paperwork problem. It’s a cryptographic gate.
Two Layers of Security
Deep breath. Still here? OK, let’s take five minutes and go touch grass. I’ll be here when you get back…
The encryption story I just walked through covers confidentiality: keeping the contents of a tune file unreadable to anyone without the right key. But confidentiality is only half of what a system like this needs. There’s also authorization — the question of which devices are allowed to do which things in the first place. Encryption doesn’t help you there. A perfectly encrypted tune file is still useless if the device receiving it isn’t licensed to flash that vehicle.
This second layer is observable in the MyGenius’s own UI. Open the device info screens and you’ll see two fields with no obvious connection to tune files:
| Field | Value |
|---|---|
| ABL ver | 5.005 |
| ABL Num | 5XXXX |
The ABL layer also has its own file extension — .abl — visible in older Dimsport documentation. Tutorials from a decade ago describe customers having to manually request an .abl file from their tuner, email back their device serial, and import the file through MyGenius Client before any vehicle work could happen. My experience was different. The MyGenius I received in 2026 came in a Dimsport box but with BT Moto branding on the unit itself, and arrived already licensed to BT Moto’s Master Code — no manual ABL import required, and as far as I can tell, no way to perform one in the current client software anyway. The activation appears to have moved server-side (and possibly factory-side, with units being pre-authorized before they ship from BT Moto). But the ABL versioning is still visible on the device info screen, so the authorization layer is clearly still there doing its job. The customer workflow has been smoothed; the underlying architecture has not.
ABL almost certainly stands for Abilitazione — Italian for authorization or enablement. Dimsport is Italian, so the abbreviations on the device firmware come out in their native language. The version number suggests the ABL spec itself has gone through five major revisions, and the ABL Num is the specific license issued to this device.
The ABL system is conceptually different from the encryption layer in two important ways — though I want to be upfront that the architectural details that follow are how I’d build this kind of system, not what I’ve directly observed inside Dimsport’s firmware.
It controls capability, not content. The encryption layer asks “can this device read this file?” The ABL system asks “is this device licensed to perform this operation on this vehicle?” Those are independent questions. A handheld ABL-licensed to flash Ducatis could in theory receive and decrypt an Africa Twin tune file — and then refuse to actually flash it, because the device isn’t authorized for that protocol. Confidentiality and authorization, separated.
The enforcement mechanism is also probably different. Encryption is enforced cryptographically — you can’t decrypt without the key, whether you want to or not. Authorization is enforced by the device choosing to check and refuse — a software property that depends on the firmware faithfully implementing the policy. If I were designing this system, I’d back the ABL layer with signed authorization tokens from Dimsport’s server: capability certificates rather than symmetric keys. The device verifies a signature, looks up the permitted operations, and gates its own behavior. Whether that’s what Dimsport actually did is a question I can’t answer from the outside, but it’s the standard pattern for this class of problem.
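Here’s the shape of that check, sketched as an Ed25519-signed capability token verified against a pinned public key. Every field name, protocol identifier, and the token format itself is hypothetical; this illustrates the pattern, not Dimsport’s implementation:

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Server side (hypothetical): sign a capability statement for one specific device.
server_key = Ed25519PrivateKey.generate()
token = json.dumps({"serial": "2507XXXX", "master": "C1434",
                    "protocols": ["HONDA_CRF1100"]}).encode()
signature = server_key.sign(token)

# Device side: pinned public key, signature check, then a policy decision.
pinned_pubkey = server_key.public_key()

def device_allows(protocol: str) -> bool:
    try:
        pinned_pubkey.verify(signature, token)
    except InvalidSignature:
        return False
    return protocol in json.loads(token)["protocols"]

print(device_allows("HONDA_CRF1100"))   # True: licensed protocol
print(device_allows("DUCATI_V4"))       # False: maybe decryptable, but not flashable
```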
What I can say with confidence is that whatever’s happening under the hood, the ABL system is real, it’s separate from the tune-file encryption, and it has its own update/distribution flow. That much is observable. The rest is informed speculation.
Assuming the architecture is roughly what I’d build, the two systems protect against different failure modes:
- If the file encryption were broken — an attacker pulls the master key out of a decapped MyGenius and starts decrypting tune files — the ABL layer would still prevent that attacker from getting an unlicensed device to flash anything. They’d have plaintext tune files and no way to use them.
- If the ABL layer were broken — an attacker forges authorization tokens and unlocks every vehicle protocol on a handheld — the encryption layer would still prevent that attacker from generating valid tune files for those protocols. They’d have an over-licensed device and no files to flash.
To defeat the whole system, you have to defeat both layers independently. That’s defense in depth, and it’s the reason commercial products like this survive in regulated markets where the cost of a single-layer breach can be catastrophic.
And here is exactly where the Part 1 callback is hiding. The brand change lock — the thing that bricked my brand-new handheld on day one — sits at the intersection of both layers. The cryptographic rebinding I described in the previous section changes the per-device master key (encryption layer). But the authorization to do that rebinding is mediated by an ABL operation (authorization layer). When Steven approved my reset, he wasn’t just unwrapping a key. He was issuing a new capability certificate that authorized Dimsport’s server to generate a new master key, and authorized my device to accept it. Two layers, one workflow, no self-service path that bypasses either.
I started this analysis expecting to find that an automotive tuning vendor had implemented “good enough” crypto and moved on. What I actually found, working from ciphertext alone, was a system where the cryptographic primitives are correct and the architecture around them is thoughtful. That’s worth saying out loud. Dimsport built this carefully.
The Threat Model
Everything above is a forensic analysis — working backward from observable artifacts to a hypothesis about how the system is built. That’s a very different exercise from a security analysis, which would ask what an attacker could actually do. I haven’t done the second exercise, and I want to be explicit about why.
A real security analysis would require things I don’t have (and don’t want):
- The MyGenius firmware, dumped from flash memory, to confirm cipher and key derivation rather than inferring them.
- Live debugging access to a running handheld to watch the decryption process in action.
- Hardware-level access to the microcontroller itself — JTAG, side-channel power analysis, possibly decapping the chip.
- A meaningful budget for legal exposure, because all of the above sits squarely in DMCA anti-circumvention territory.
If you want to do this kind of work without the legal exposure, by the way, there are companies that will pay you well to do it. In fact, well enough to have your own fleet of motorcycles to blog about. Some of them are even hiring. I mention this gently. Moving on.
My goal was never to break Dimsport’s security; it was to understand the shape of the system from the artifacts a normal customer ends up with. From that vantage point: against a passive attacker who intercepts .FPF files, the system is well-defended. Against a casual reverse engineer with a Python interpreter, the system is well-defended — that’s exactly what I am, and the most I’ve managed is to sketch the architecture. Against a determined attacker with hardware skills and physical access, every commercial product is eventually defeatable; this is the textbook evil maid scenario, not a Dimsport-specific weakness. The right question isn’t can the system be broken, but does it raise the cost of attack high enough to deter the attacks that actually matter to the business? For Dimsport, clearly yes. I’m exhibit A.
What I Learned
Forensic analysis from ciphertext alone is genuinely informative. A single ciphertext is a brick wall. Four ciphertexts encrypted under the same scheme are a small dataset, and small datasets give up structural information whether the designer wants them to or not. Anyone holding a folder of related encrypted files from any system is sitting on more information than they probably realize.
Always print the bytes. The most important moment in the analysis was the ninety seconds between “I found a fixed header region” and “those are all zeros.” If I hadn’t bothered with the hex dump, I’d have written a section about the structural signature of the FPF format that didn’t exist. Statistical results are how you find things to look at. The bytes are how you tell whether what you found is real.
The result you want is the boring result. I’ll be honest — when I started running the analysis, some part of me was hoping the numbers would be weird. A poorly-seeded IV, a detectable structural pattern, a cipher mode misconfigured in some interesting way. That’s the kind of finding that turns into a conference talk. Instead I got entropy at the ceiling, zero block matches, and a clean per-file IV. The “what I found” section of this post is mostly the absence of things to report. That’s how it’s supposed to go. Crypto that produces interesting byte-level statistics is crypto that’s broken.
Architecture telegraphs more than algorithms do. The single most informative artifact in this whole exercise wasn’t the entropy histogram. It was the Master/Client lockout error message from Part 1.
“Device locked. Waiting for the brand change authorization from the master” told me there was a key derivation hierarchy gated by a server-side authorization process before I’d opened a single hex editor.

AI as a security research partner has the same shape as AI as an engineering research partner. In Part 1, Claude helped me research the O2 sensor disconnect, structure the diagnostic email to Steven, and stay rigorous about not changing two variables at once during the cold-start tests. Here, it helped me write the Python, sanity-check the math, and push back when I was overconfident about things I couldn’t actually verify. Different domain, same shape of contribution. The AI is fast at researching, structuring, verifying, and challenging. The judgment calls are still mine (for now).
The whole reason I went down this rabbit hole is that the MyGenius UI let me lock myself out on day one. If Forced Reset / Brand Change hadn’t been one click away from a routine firmware update in poorly translated Italian software, I’d never have opened that support ticket, never have learned the word abilitazione, and never have been curious enough to xxd a tune file. I expected to find cryptography that was sloppy — “good enough” automotive vendor work, the kind you find behind a clunky desktop client and assume nobody serious ever looked at. What I actually found was the opposite. Audited primitives, per-file IVs, per-device key derivation, and a layered authorization system underneath. Strong-crypto teams and great-UX teams are rarely the same team, and Dimsport is clearly the former. But credit where it’s due: they got the part that mattered right.
To be continued…
