Review and analysis of XTAR 3600 mAh 18650 Li-Ion protected battery

TL;DR – Yes, the XTAR 3600 mAh protected 18650 battery meets its rated capacity! Just make sure your device can drain down to 2.5 volts.

The 18650 lithium-ion battery, named after its dimensions (18 mm diameter, 65.0 mm length), has seen continuous improvements in capacity over the years. Nowadays it’s easy to get capacities in excess of 3000 mAh, with the highest-capacity cells reaching and even exceeding 3500 mAh.

The XTAR 3600 mAh protected 18650 lithium-ion battery, pictured with its original box.

Earlier this year, XTAR, a Chinese company specializing in batteries and battery accessories, announced the newest and highest-capacity addition to their protected 18650 lineup, boasting an impressive 3600 mAh capacity and aimed at medium-drain applications like power banks and LED flashlights (sorry vapers, this one’s not for you!). After submitting my name for consideration, I was one of a few people selected to receive some samples for review; considering how much I’ve blogged about lithium-ion batteries on here, it only makes sense to talk even more about them!

FULL DISCLOSURE: XTAR provided these batteries to me at no charge for an independent review. They had no editorial control over this review, I have not been compensated monetarily, and all opinions expressed in this review are my own.

Introduction

XTAR’s new 3600 mAh protected 18650 comes in a discreet box, holding nothing more than the battery itself. Like all protected 18650s, the protection comes in the form of a small PCB (printed circuit board) that is stacked on the negative end of the bare 18650 cell, with a metal strap running up the cell’s body to the positive terminal, where a small “button top” is attached to improve electrical connectivity for spring-loaded battery holders.

Positive and negative terminals of the XTAR 3600 mAh protected 18650 battery. Note the added length due to the button top on the left, and the protection circuit and protective plate on the right.

I was able to request an official datasheet for the battery, which I have included at the end of my blog post. My main goal with this review is to test the batteries from an objective perspective, using dedicated test equipment rather than in-application testing in devices like flashlights. If I do decide to perform such tests, I’ll add a second part to my review (stay tuned!).

Datasheet specifications

Parameter                      Value
Cell manufacturer              (Unspecified)
Nominal voltage                3.6 V
Nominal capacity               3600 mAh
Minimum capacity               3500 mAh
Discharge cutoff voltage       2.5 V
Cycle life @ 80% capacity      At least 300 times
                               (0.5C charge to 4.2 V, 0.03C taper; 0.5C discharge to 2.5 V)
Size                           18.1~18.7 mm diameter, 68.3~69.3 mm length
Weight                         ≤50 g
Internal AC resistance         ≤45 mΩ
Standard charge                CC 720 mA, CV 4.2 V, 50 mA taper
Fast charge                    CC 2500 mA (0.7C), CV 4.2 V, 36 mA (0.01C) taper
Standard charge voltage        4.2 V
Standard discharge             720 mA CC to 2.5 V
Continuous discharge current   >8.5 A
Overvoltage protection         Trip 4.25~4.35 V, recover 4.0~4.2 V
Undervoltage protection        Trip 2.4~2.5 V, recover 2.8~3.0 V
Overcurrent protection         Trip 11~13 A
Working temperature            Charging 0~45 °C, discharging -20~60 °C

Test 1: EBL TC-X Pro battery analyzer

Initial testing of the batteries was performed using my EBL TC-X Pro 4-bay battery analyzer. The test procedure was as follows:

  1. Record out-of-box/initial voltage before charging
  2. Run automatic capacity test with programmed 1500 mA charge current to 4.2 V, and fixed 500 mA discharge to 2.75 V (charge -> discharge -> charge); record discharge capacity and reported internal resistance
  3. Run manual discharge test with fixed 500 mA current to 2.75 V; record discharge capacity and reported internal resistance
  4. Run manual charge with 1500 mA charge current to 3.65 V (LiFePO4 mode; a moderate voltage is best for long-term storage); record charge capacity

Parameter (charge current = 1.5 A)               Cell 1 (bay 1)   Cell 2 (bay 2)
Initial voltage (V)                              3.65             3.55
Capacity run 1 (mAh @ 2.75 V end of discharge)   3349             3369
Capacity run 2 (mAh @ 2.75 V end of discharge)   3334             3386
Internal resistance run 1 (mΩ)                   54.9             46.9
Internal resistance run 2 (mΩ)                   53.4             45.1
Storage capacity (mAh charged to 3.65 V)         1402             1494

The capacity numbers seemed a bit low to me, even for an incomplete discharge to 2.75 volts instead of 2.5 volts. I decided to continue testing with a more accurate (and calibratable) battery fuel gauge board, revealing the true capacity of this battery… and it’s as good as the manufacturer promised!

Test 2: Texas Instruments bq78z100 fuel gauge

After realizing that my dedicated battery analyzer wasn’t quite as accurate as I wanted, I decided to use a battery fuel gauge board that I had more confidence in. I had previously built a board based on the Texas Instruments bq78z100, a dedicated battery management system (BMS) on a chip, aimed at 1S and 2S battery packs. This allowed me to calibrate the voltage and current measurements against my Agilent/Keysight U1253B multimeter and therefore get the most accurate results, as I can also program the bq78z100’s autonomous protection features to provide precise control over the discharge cutoff voltage during testing.

XTAR 3600 mAh protected 18650 battery connected to the bq78z100-based fuel gauge board. (note: actual setup is different than as pictured)

Using this circuit, I was able to get multiple measurements of the battery’s capacity with the help of a Texas Instruments GDK (Gauge Development Kit) as a charger and adjustable load for some of the capacity tests; and an Arachnid Labs re:load 2 adjustable constant-current load, with some additional forced-air cooling for the higher discharge rates.

I suspect that if I had a more, erm, “professional” setup, I could extract even more capacity from the battery, as additional resistance between the battery and the fuel gauge board will cause a voltage drop and subsequent loss in measured capacity.

Available capacity vs. end-of-discharge voltage

This is one of the most interesting test results, in my opinion. Many 1S/”single-cell” devices can’t take full advantage of modern NMC (nickel-manganese-cobalt) cells whose end-of-discharge voltage is below 3 volts, and the following chart helps quantify how much capacity can be extracted if one stops earlier than that. One advantage of reducing the depth of discharge, however, is that it can extend the battery’s cycle life and reduce the amount of capacity loss it experiences with age. As long as you discharge all the way to 2.5 volts and at a modest rate, the XTAR 3600 mAh battery achieves its rated capacity, and then some!

XTAR 3600 mAh protected 18650’s voltage/capacity curves at various discharge rates. (click here for full-size chart)

Discharge rate   Capacity/DoD @ 3.3 V   Capacity/DoD @ 3.0 V   Capacity/DoD @ 2.75 V   Capacity/DoD @ 2.5 V
                 (Rel/Abs)              (Rel/Abs)              (Rel/Abs)               (Rel/Abs)
C/10 (350 mA)    2616 mAh               3357 mAh               3539 mAh                3613 mAh
                 72.4%/72.4%            92.9%/92.9%            97.8%/97.8%             100%/100%
C/5 (720 mA)     2555 mAh               3250 mAh               (no data)               (no data)
                 (N/A)/70.7%            (N/A)/90.0%
C/2 (1800 mA)    2310 mAh               3076 mAh               3338 mAh                3438 mAh
                 67.2%/64.0%            89.4%/85.1%            97.1%/92.3%             100%/95.2%
1C (3600 mA)     1995 mAh               2874 mAh               3330 mAh                3486 mAh
                 57.2%/55.2%            82.4%/79.5%            95.5%/92.2%             100%/96.5%

Note that the “relative” and “absolute” percentages are referenced to each row’s own 2.5-volt discharge value and to the C/10 rate, respectively. The curve for the C/5 discharge rate ends early, as that data was collected with my Texas Instruments GDK (Gauge Development Kit), which has a hardcoded discharge cutoff at 2.9 volts. One oddity of the 1C curve is that it actually yields slightly more capacity than the C/2 rate; I suspect this is because the battery’s internal resistance was decreasing due to cell heating, allowing the lithium ions to travel across the cell’s separator more easily. Additionally, the capacity under-reporting issue I was having with the EBL tester is more visible in this chart: a rough extrapolation would put the 500 mA discharge curve at around 3550 mAh, a fair amount higher than the measured ~3340 mAh. If I have the time to recollect some data, I’ll add it to the chart.
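
To make the relative/absolute distinction concrete, here’s a minimal Python sketch that derives both percentages from the C/2 row of the table above (the capacities are my measured values; the helper names are my own):

# Relative vs. absolute depth-of-discharge percentages, as used in the table above.
# "Relative" is referenced to the same discharge rate's capacity at 2.5 V;
# "absolute" is referenced to the C/10 capacity at 2.5 V.

C10_FULL_CAPACITY_MAH = 3613  # C/10 discharge to 2.5 V (the "absolute" reference)

def dod_percentages(capacity_mah, same_rate_capacity_at_2v5_mah):
    relative = 100.0 * capacity_mah / same_rate_capacity_at_2v5_mah
    absolute = 100.0 * capacity_mah / C10_FULL_CAPACITY_MAH
    return relative, absolute

# C/2 rate, discharged to 3.0 V: 3076 mAh measured, 3438 mAh at the 2.5 V cutoff
relative, absolute = dod_percentages(3076, 3438)
print(f"{relative:.1f}% relative, {absolute:.1f}% absolute")  # matches the table's 89.4%/85.1% within rounding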

Thermal performance

I don’t have the equipment to test beyond a 1C discharge rate, but even at a C/2 discharge rate I noticed significant heating of the battery; nothing disconcerting but still noteworthy. At 1C, the temperature rose to just over 40 degrees Celsius (104 degrees Fahrenheit) by the end of the discharge cycle – definitely warm to the touch but not burning hot. However, I imagine that a discharge rate at 2C or even higher will result in battery temperatures exceeding 50 degrees Celsius (122 degrees Fahrenheit), hot enough to be uncomfortable to hold. Note that lithium-ion batteries should not be charged if they are warmer than 45 degrees Celsius (113 degrees Fahrenheit).

XTAR 3600 mAh protected 18650’s temperature curves at various discharge rates. (click here for full-size chart)

Chemistry analysis

After running the data taken at a C/10 discharge rate through TI’s online tool, GPCCHEM, I was able to get two chemistry IDs that would allow me to get an accurate model of the cells for use in their Impedance Track line of fuel gauges.

Chemistry ID (hex)   Chemistry description           Cell match                           Max DoD error (%)   Max Ra deviation ratio
5267                 LiMn2O4 (Co,Ni)/carbon, 4.2 V   Bak: N18650CP (3350 mAh)             2.3                 0.27
5634                 LiMn2O4 (Co,Ni)/carbon, 4.2 V   NanoGraf: INR-18650-M38A (3800 mAh)  2.72                0.61

Don’t pay too much attention to the details; they are provided for informational purposes only and are mainly useful to those who want to use these batteries with TI’s fuel gauge chips. The listed cell models don’t guarantee that these batteries actually use those exact cells, but they do suggest that our data is otherwise trustworthy: both matches are high-capacity cells using a chemistry system containing nickel, manganese, and cobalt (NMC for short), and such chemistries tend to have a relatively high capacity at the expense of a lower terminal voltage (3.6 V nominal and 2.5 V at end of discharge, versus the typical 3.7 V nominal and 3.0 V end of discharge).

Conclusion

After my testing, I can confidently say that XTAR’s 3600 mAh 18650 really does achieve its rated capacity. As long as your target device is capable of discharging to 2.5 volts and at a modest rate, it will be able to get the most out of this battery.

If you want to purchase this battery, you can do so from their online store: https://xtardirect.com/products/18650-3600mah-battery?VariantsId=10393

At the time of writing, XTAR is selling this battery at $11.90 USD for a single battery, or $23.80 USD for a two-pack with carrying case.

Downloads

The datasheet for the battery can be found here: https://www.dropbox.com/s/wibdm2ty6zfa7xo/XTAR%2018650%203600mAh%20Specification.pdf?dl=1

Ramble: 2020 in review

It goes without saying that 2020 has turned out to be… less than optimal for most of us. The COVID-19 pandemic has wreaked havoc on virtually every facet of our lives, with innumerable losses around the world. Even with all the (involuntary) free time added by local lockdown(s) and other social restrictions, I’ll admit that I haven’t made nearly as many blog posts this year as I would have liked. However, that doesn’t mean people have stopped reading my content – let’s see how this year has fared in that regard.

Top 5 Posts

Post                                                          Views
(New!) Reverse-engineering SanDisk’s High Endurance SD card   7,717
KitchenAid induction cooktop service manual                   4,185
Building my own SD card with an eMMC chip                     4,082
Resurrecting a dead MacBook Pro                               3,649
SIM card PIN recovery with a logic analyzer                   2,952

The one post that did make it out of the Drafts folder this year was my adventure in reverse-engineering a SanDisk high-endurance memory card in order to see what specific kind of NAND Flash was being used, earning over 7,500 views this year; these views primarily came from Hackaday, Twitter and Reddit.

The other posts are classics that people still love to read about. Once again, the KitchenAid service manual’s popularity indicates that KitchenAid/Whirlpool still hasn’t done anything to fix their faulty designs – and the comments suggest that the (ongoing as of the writing of this post) pandemic has made it even more difficult to get damaged cooktops serviced!

Views

This year ranks 5th out of 8 years of running my blog, amassing 102,420 views this year. Given the relative lack of featured posts on sites like Hackaday, this drop in views is largely expected – but I’m still happy to have surpassed the 100,000 view mark this year.

WordAds Performance

This year’s WordAds performance is the worst I’ve seen since I joined back in late 2017. Despite serving 933,470 ads this year, this netted only $98 USD, yielding a CPM (cost per 1,000 ad impressions) of merely $0.10! This is in stark contrast with even last year’s CPM of $0.24 – and that was already on a huge downslope! The lowest point was in June, with a CPM of merely $0.04 and revenue of just $5.21. However, CPM rates have begun to improve to late-2019 levels since October.

Of course, this isn’t unexpected as the COVID-19 pandemic has decimated the world’s economy, and online advertisements are no exception. Consequently, this year’s revenue isn’t even enough to cover my hosting costs, but it still helps make running the blog less painful for my wallet.

Looking Forward

After this year’s tumultuous chain of events (and not being quite out of the woods in terms of COVID-19 vaccination and elimination), hopefully 2021 will be a reprieve after 2020’s parade of constant interruptions and unforeseen consequences. Hopefully the upcoming year will be a lot better for all of us. I want to get a lot of posts out of the Drafts folder, as I’ve been working on a lot of projects – but blog posts (sadly) don’t write themselves! Stay tuned…

Happy New Year! Here’s to hoping 2021 won’t be nearly as disastrous as 2020; once again, many thanks to everyone who reads and shares my blog posts – it means a lot to me! –Jason

Reverse-engineering and analysis of SanDisk High Endurance microSDXC card

As seen on Hackaday!

TL;DR – The SanDisk High Endurance cards use SanDisk/Toshiba 3D TLC Flash. It took way, way more work than it should have to figure this out (thanks for nothing, SanDisk!).
In contrast, the SanDisk MAX Endurance uses the same 3D TLC in pMLC (pseudo-multi-level cell) mode.

In a previous blog post, I took a look at SanDisk’s microSD cards aimed at write-intensive applications like dash cameras. In that post I examined the card’s performance metrics and speculated about what sort of NAND Flash memory is used inside. SanDisk doesn’t publish any detailed specifications about the cards’ internal workings, which means I had no choice but to reverse-engineer this can of worms myself.

The Help Desk That Couldn’t

In the hopes of uncovering more information, I sent an email to SanDisk’s support team asking about what type of NAND Flash they are using in their High Endurance lineup of cards, alongside endurance metrics like P/E (Program/Erase) cycle counts and total terabytes written (TBW). Unfortunately, the SanDisk support rep couldn’t provide a satisfactory answer to my questions, as they’re not able to provide any information that’s not listed in their public spec sheets. They said that all of their cards use MLC Flash, which I guess is correct if you call TLC Flash 3-bit MLC (which Samsung does).

Dear Jason,

Thank you for contacting SanDisk® Global customer care. We really appreciate you being a part of our SanDisk® family.

I understand that you wish to know more about the SanDisk® High Endurance video monitoring card, as such please allow me to inform you that all our SanDisk® memory cards uses Multi level cell technology (MLC) flash. However, the read/write cycles for the flash memory is not published nor documented only the read and write speed in published as such they are 100 MB/S & 40 MB/s. The 64 GB card can record Full HD video up to 10,000 hours. To know more about the card you may refer to the link below:

SANDISK HIGH ENDURANCE VIDEO MONITORING microSD CARD

Best regards,
Allan B.
SanDisk® Global Customer Care

I’ll give them a silver star that says “You Tried” at least.

Anatomy of an SD Card

While (micro)SD cards feel like a solid monolithic piece of technology, they’re made up of multiple different chips, each performing a different role. A basic SD card will have a controller that manages the NAND Flash chips and communicates with the host (PC, camera, etc.), and the NAND Flash itself (made up of 1 or more Flash dies). Bunnie Huang’s blog, Bunnie Studios, has an excellent article on the internals of SD cards, including counterfeits and how they’re made – check it out here.

SD Card Anatomy

Block diagram of a typical SD card.

MicroSD cards often (but not always!) include test pads, used to program/test the NAND Flash during manufacture. These can be exploited in the case of data recovery, or to reuse microSD cards that have a defective controller or firmware by turning the card into a piece of raw NAND Flash – check out Gough Lui’s adventures here. Note that there is no set standard for these test pads (even for the same manufacturer!), but there are common patterns for some manufacturers like SanDisk that make reverse-engineering easier.

Crouching Controller, Hidden Test Pads

microSD cards fall into a category of “monolithic” Flash devices, as they combine a controller and raw NAND Flash memory into a single, inseparable package. Many manufacturers break out the Flash’s data bus onto hidden (and nearly completely undocumented) test pads, which some other memory card and USB drive manufacturers take advantage of to make cheap storage products using failed parts; the controller can simply be electrically disabled and the Flash is then used as if it were a regular chip.

In the case of SanDisk cards, there is very limited information on their cards’ test pad pinouts. Each generation has slight differences, but the layout is mostly the same. These differences can be fatal, as the power and ground connections are sometimes reversed (this spells instant death for a chip if its power polarity is mixed up!).

CORRECTION (July 22, 2020): Upon further review, I might have accidentally created a discrepancy between the leaked pinouts online, versus my own documentation in terms of power polarity; see the “Test Pad Pinout” section.

My card (and many of their higher-end cards – that is, not their Ultra lineup) features test pads that aren’t covered by solder mask, but are instead covered by some sort of screen-printed epoxy with a laser-etched serial number on top. With a bit of heat and some scraping, I was able to remove the (very brittle) coating on top of the test pads; this also removed the serial number which I believe is an anti-tamper measure by SanDisk.

After cleaning off any last traces of the epoxy coating, I was greeted with the familiar SanDisk test pad layout, plus a few extra on the bottom.

Building the Breakout Board

The breakout board is relatively simple in concept: for each test pad, bring out a wire that goes to a bigger test point for easier access, and wire up the normal SD bus to an SD connector to let the controller do its work with twiddling the NAND Flash bus. Given how small each test pad is (and how many), things get a bit… messy.

I started by using double-sided foam adhesive tape to secure the SD card to a piece of perfboard. I then tinned all of the pads and soldered a small 1uF ceramic capacitor across the card’s power (Vcc) and ground (GND) test pads. Using 40-gauge (0.1 mm, or 4-thousandths of an inch!) magnet wire, I mapped each test pad to its corresponding machine-pin socket on the perfboard. Including the extra test pads, that’s a total of 28 tiny wires!

For the SD connector side of things, I used a flex cable for the XTC 2 Clip (a tool used to service HTC Android devices), as it acted as a flexible “remote SD card” and broke out the signals to a small ribbon cable. I encased the flex cable with copper tape to act as a shield against electrical noise and to provide physical reinforcement, and soldered the tape to the outer pads on the perfboard for reinforcement. The ribbon cable end was then tinned and each SD card’s pin was wired up with magnet wire. The power lines were then broken out to an LED and resistor to indicate when the card was receiving power.

Bus Analysis

With all of the test pads broken out to an array of test pins, it was time to make sense of what pins are responsible for accessing the NAND Flash inside the card.

Test Pad Pinout

Diagram of the test pads on SanDisk’s High Endurance microSD card. (click to enlarge)

The overall test pad pinout was the same for other microSD cards from SanDisk, but there were some differences, primarily regarding the layout of the power pads; notably, the main power pins are backwards! This can destroy the card if you’re not careful when applying power.

CORRECTION (July 22, 2020): I might actually have just gotten my own documentation mixed up in terms of the power and ground test pads. Regardless, one should always be careful to ensure the correct power polarity is sent to a device.

I used my DSLogic Plus logic analyzer to analyze the signals on all of the pins. Since the data pinout was previously discovered, the hard part of figuring out what each line represented (data bus, control, address, command, write-protect, ready/busy status) was already done for me. However, not all of the lines were immediately evident as the pinouts I found online only included the bare minimum of lines to make the NAND Flash accessible, with one exception being a control line that places the controller into a reset state and releases its control of the data lines (this will be important later on).

By sniffing the data bus at the DSLogic’s maximum speed (and using its 32 MB onboard buffer RAM), I was able to get a clear snapshot of the commands being sent to the NAND Flash from the controller during initialization.

Bus Sniffing & NAND I/O 101 (writing commands, address, reading data)

In particular, I was looking for two commands: RESET (0xFF), and READ ID (0x90). When looking for a command sequence, it’s important to know how and when the data and control lines change. I will try to explain it step-by-step, but if you’re interested there is an introductory white paper by Micron that explains all of the fundamentals of NAND Flash with much more information about how NAND Flash works.

When a RESET command is sent to the NAND Flash, the /CE (Chip Enable, Active Low) line is first pulled low. Then the CLE (Command Latch Enable) line is pulled high; the data bus is set to its intended value of 0xFF (all binary ones); then the /WE (Write Enable, Active Low) line is pulsed from high to low, then back to high again (the data bus’ contents are committed to the chip when the /WE line goes from low to high, known as a “rising edge”); the CLE line is pulled back low to return to its normal state. The Flash chip then pulls its R/B (Ready/Busy Status) line low to indicate it is busy resetting itself, then releases the line back to its high state when it’s finished.

The READ ID command works similarly, except after writing the command with 0x90 (binary 1001 0000) on the data bus, it then pulls the ALE (Address Latch Enable) line high instead of CLE, and writes 0x00 (all binary zeroes) by pulsing the /WE line low. The chip transfers its internally programmed NAND Flash ID into its internal read register, and the data is read out from the device on each rising edge of the /RE (Read Enable, Active Low) line; for most devices this is 4 to 8 bytes of data.
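
To summarize the sequencing, here’s a rough Python sketch of both commands. The nand object and its methods (set_pin, write_bus, read_bus, wait_ready) are stand-ins for whatever actually drives the lines – on real hardware this would be microcontroller firmware – so treat this as an illustration of the timing, not a usable driver:

# Sketch of the NAND RESET (0xFF) and READ ID (0x90) sequences described above.

def write_cycle(nand, value):
    nand.write_bus(value)    # put the byte on the 8-bit data bus
    nand.set_pin("WE#", 0)   # data is committed on the rising edge of /WE...
    nand.set_pin("WE#", 1)   # ...which happens here

def nand_reset(nand):
    nand.set_pin("CE#", 0)   # select the chip
    nand.set_pin("CLE", 1)   # command latch enable: next write cycle is a command
    write_cycle(nand, 0xFF)  # RESET command
    nand.set_pin("CLE", 0)
    nand.wait_ready()        # R/B# goes low while busy, back high when done

def nand_read_id(nand, length=8):
    nand.set_pin("CE#", 0)
    nand.set_pin("CLE", 1)
    write_cycle(nand, 0x90)  # READ ID command
    nand.set_pin("CLE", 0)
    nand.set_pin("ALE", 1)   # address latch enable: next write cycle is an address
    write_cycle(nand, 0x00)  # address 0x00
    nand.set_pin("ALE", 0)
    # each /RE pulse clocks one ID byte out of the chip's read register
    return bytes(nand.read_bus() for _ in range(length))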

NAND Flash ID

Each NAND Flash device has a (mostly) unique ID that identifies the manufacturer, along with other functional data defined by that manufacturer; in other words, only the manufacturer ID, assigned by the JEDEC Solid State Technology Association, is well-defined.

The first byte represents the Flash manufacturer, and the rest (2 to 6 bytes) define the device’s characteristics, as set out by the manufacturer themselves. Most NAND vendors are very tight-lipped when it comes to datasheets, and SanDisk (and by extension, Toshiba/Kioxia) maintain very strict control, save for some slightly outdated leaked Toshiba datasheets. Because the two aforementioned companies share their NAND fabrication facilities, we can reasonably presume the data structures in the vendor-defined bytes can be referenced against each other.

In the case of the SanDisk High Endurance 128GB card, it has a NAND Flash ID of 0x45 48 9A B3 7E 72 0D 0E. Some of these values can be compared against a Toshiba datasheet:

Byte Value (Hex) Description/Interpretation
45
  • Manufacturer: SanDisk
48
  • I/O voltage: Presumed 1.8 volts (measured with multimeter)
  • Device capacity: Presumed 128 GB (unable to confirm against datasheet)
9A
  • NAND type: TLC (Triple-Level Cell / 3 bits per cell)
  • Flash dies per /CE: 4 (card uses four 32GB Flash chips internally)
B3
  • Block size: 12 MiB (768 pages per block) excluding spare area (determined outside datasheet)
  • Page size: 16,384 bytes / 16 kiB excluding spare area
7E
  • Planes per /CE: 8 (2 planes per die)
72
  • Interface type: Asynchronous
  • Process geometry: BiCS3 3D NAND (determined outside datasheet)
0D
  • Unknown (no information listed in datasheet)
0E
  • Unknown (no information listed in datasheet)

Although not all byte values could be conclusively determined, I was able to determine that the SanDisk High Endurance cards use BiCS3 3D TLC NAND Flash – TLC rather than the MLC that SanDisk’s support claimed, but at least it is 3D NAND, which improves endurance dramatically compared to traditional/planar NAND. Unfortunately, from this information alone, I cannot determine whether the card’s controller takes advantage of any SLC caching mechanisms for write operations.
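
Only the manufacturer byte can be decoded without vendor datasheets; as a quick sanity check, here’s a small Python snippet that does just that (the lookup table only covers a few common JEDEC manufacturer codes):

# Decoding the manufacturer byte of the NAND Flash ID; the remaining bytes
# are vendor-specific and were interpreted in the table above.

JEDEC_MAKERS = {0x45: "SanDisk", 0x98: "Toshiba/Kioxia", 0x2C: "Micron",
                0xEC: "Samsung", 0xAD: "SK Hynix"}  # a few common codes

flash_id = bytes.fromhex("45489AB37E720D0E")
maker = JEDEC_MAKERS.get(flash_id[0], f"unknown (0x{flash_id[0]:02X})")
print(f"Manufacturer: {maker}")                                # Manufacturer: SanDisk
print("Vendor-specific bytes:", flash_id[1:].hex(" ").upper())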

The chip’s process geometry was determined by looking up the first four bytes of the Flash ID, and cross-referencing it to a line from a configuration file in Silicon Motion’s mass production tools for their SM3271 USB Flash drive controller, and their SM2258XT DRAM-less SSD controller. These tools revealed supposed part numbers of SDTNAIAMA-256G and SDUNBIEMM-32G respectively, but I don’t think this is accurate for the specific Flash configuration in this card.

External Control

I wanted to make sure that I was getting the correct ID from the NAND Flash, so I rigged up a Texas Instruments MSP430FR2433 development board and wrote some (very) rudimentary code to send the required RESET and READ ID commands, and attempt to extract any extra data from the chip’s hidden JEDEC Parameter Page along the way.

My first roadblock was that the MSP430 would reset every time it attempted to send the RESET command, suggesting that too much current was being drawn from the MSP430 board’s limited power supply. This can occur during bus contention, where two devices “fight” each other when trying to set a certain digital line both high and low at the same time. I was unsure what was going on, since publicly-available information didn’t mention how to disable the card’s built-in controller (doing so would cause it to tri-state the lines, effectively “letting go” of the NAND bus and allowing another device to take control).

I figured out that the A1 test pad (see diagram) was the controller’s reset line (pulsing this line low while the card was running forced my card reader to power-cycle it), and by holding the line low, the controller would release its control of the NAND Flash bus entirely. After this, my microcontroller code was able to read the Flash ID correctly and consistently.

Reading out the card’s Flash ID with my own microcontroller code.

JEDEC Parameter Page… or at least what SanDisk made of it!

The JEDEC Parameter Page, if present, contains detailed information on a Flash chip’s characteristics with far greater detail than the NAND Flash ID – and it’s well-standardized so parsing it would be far easier. However, it turns out that SanDisk decided to ignore the standard format, and decided to use their own proprietary Parameter Page format! Normally the page starts with the ASCII string “JEDEC”, but I got a repeating string of “SNDK” (corresponding with their stock symbol) with other data that didn’t correspond to anything like the JEDEC specification! Oh well, it was worth a try.

I collected the data with the same Arduino sketch as shown above, and pulled 1,536 bytes’ worth of data; I wrote a quick program on Ideone to provide a nicely-formatted hex dump of the first 512 bytes of the Parameter Page data:

Offset 00:01:02:03:04:05:06:07:08:09:0A:0B:0C:0D:0E:0F 0123456789ABCDEF
------ --+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- ----------------
0x0000 53 4E 44 4B 53 4E 44 4B 53 4E 44 4B 53 4E 44 4B SNDKSNDKSNDKSNDK
0x0010 53 4E 44 4B 53 4E 44 4B 53 4E 44 4B 53 4E 44 4B SNDKSNDKSNDKSNDK
0x0020 08 08 00 08 06 20 00 02 01 48 9A B3 00 05 08 41 ..... ...H.....A
0x0030 48 63 6A 08 08 00 08 06 20 00 02 01 48 9A B3 00 Hcj..... ...H...
0x0040 05 08 41 48 63 6A 08 08 00 08 06 20 00 02 01 48 ..AHcj..... ...H
0x0050 9A B3 00 05 08 41 48 63 6A 08 08 00 08 06 20 00 .....AHcj..... .
0x0060 02 01 48 9A B3 00 05 08 41 48 63 6A 08 08 00 08 ..H.....AHcj....
0x0070 06 20 00 02 01 48 9A B3 00 05 08 41 48 63 6A 08 . ...H.....AHcj.
0x0080 08 00 08 06 20 00 02 01 48 9A B3 00 05 08 41 48 .... ...H.....AH
0x0090 63 6A 08 08 00 08 06 20 00 02 01 48 9A B3 00 05 cj..... ...H....
0x00A0 08 41 48 63 6A 08 08 00 08 06 20 00 02 01 48 9A .AHcj..... ...H.
0x00B0 B3 00 05 08 41 48 63 6A 08 08 00 08 06 20 00 02 ....AHcj..... ..
0x00C0 01 48 9A B3 00 05 08 41 48 63 6A 08 08 00 08 06 .H.....AHcj.....
0x00D0 20 00 02 01 48 9A B3 00 05 08 41 48 63 6A 08 08  ...H.....AHcj..
0x00E0 00 08 06 20 00 02 01 48 9A B3 00 05 08 41 48 63 ... ...H.....AHc
0x00F0 6A 08 08 00 08 06 20 00 02 01 48 9A B3 00 05 08 j..... ...H.....
0x0100 41 48 63 6A 08 08 00 08 06 20 00 02 01 48 9A B3 AHcj..... ...H..
0x0110 00 05 08 41 48 63 6A 08 08 00 08 06 20 00 02 01 ...AHcj..... ...
0x0120 48 9A B3 00 05 08 41 48 63 6A 08 08 00 08 06 20 H.....AHcj..... 
0x0130 00 02 01 48 9A B3 00 05 08 41 48 63 6A 08 08 00 ...H.....AHcj...
0x0140 08 06 20 00 02 01 48 9A B3 00 05 08 41 48 63 6A .. ...H.....AHcj
0x0150 08 08 00 08 06 20 00 02 01 48 9A B3 00 05 08 41 ..... ...H.....A
0x0160 48 63 6A 08 08 00 08 06 20 00 02 01 48 9A B3 00 Hcj..... ...H...
0x0170 05 08 41 48 63 6A 08 08 00 08 06 20 00 02 01 48 ..AHcj..... ...H
0x0180 9A B3 00 05 08 41 48 63 6A 08 08 00 08 06 20 00 .....AHcj..... .
0x0190 02 01 48 9A B3 00 05 08 41 48 63 6A 08 08 00 08 ..H.....AHcj....
0x01A0 06 20 00 02 01 48 9A B3 00 05 08 41 48 63 6A 08 . ...H.....AHcj.
0x01B0 08 00 08 06 20 00 02 01 48 9A B3 00 05 08 41 48 .... ...H.....AH
0x01C0 63 6A 08 08 00 08 06 20 00 02 01 48 9A B3 00 05 cj..... ...H....
0x01D0 08 41 48 63 6A 08 08 00 08 06 20 00 02 01 48 9A .AHcj..... ...H.
0x01E0 B3 00 05 08 41 48 63 6A 08 08 00 08 06 20 00 02 ....AHcj..... ..
0x01F0 01 48 9A B3 00 05 08 41 48 63 6A 08 08 00 08 06 .H.....AHcj.....

Further analysis with my DSLogic showed that the controller itself requests a total of 4,128 bytes (4 kiB + 32 bytes) of Parameter Page data, which is filled with the same repeating data as shown above.
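
For the curious, a minimal Python version of that kind of hex dump formatter (a re-creation for illustration, not the exact program I used) looks like this:

# Minimal hex dump formatter: 16 bytes per line, with offset, hex and ASCII
# columns, similar to the Parameter Page dump above.

def hexdump(data: bytes, width: int = 16) -> str:
    lines = []
    for offset in range(0, len(data), width):
        chunk = data[offset:offset + width]
        hex_col = " ".join(f"{b:02X}" for b in chunk)
        ascii_col = "".join(chr(b) if 0x20 <= b < 0x7F else "." for b in chunk)
        lines.append(f"0x{offset:04X} {hex_col:<{width * 3 - 1}} {ascii_col}")
    return "\n".join(lines)

print(hexdump(b"SNDKSNDKSNDKSNDK" * 2))  # two rows of 53 4E 44 4B ...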

Reset Quirks

When looking at the logic analyzer data, I noticed that the controller sends the READ ID command twice, but the first time it does so without resetting the Flash (which should normally be done as soon as the chip is powered up!). The data that the Flash returned was… strange to say the least.

Byte Value (Hex) Interpreted Value
98
  • Manufacturer: Toshiba
00
  • I/O voltage: Unknown (no data)
  • Device capacity: Unknown (no data)
90
  • NAND type: SLC (Single-Level Cell / 1 bit per cell)
  • Flash dies per /CE: 1
93
  • Block size: 4 MB excluding spare area
  • Page size: 16,384 bytes / 16 kiB excluding spare area
76
  • Planes per /CE: 2
72
  • Interface type: Asynchronous
  • Process geometry: 70 nm planar

This confused me initially when I was trying to find the ID from the logic capture alone; after talking to a contact who has experience in NAND Flash data recovery, they said this is expected for SanDisk devices, which make liberal use of vendor-proprietary commands and data structures. If the fourth byte is to be believed, it says the block size is 4 megabytes, which I think is plausible for a modern Flash device. The rest of the information doesn’t really make any sense to me apart from the first byte indicating the chip is made by Toshiba.

Conclusion

I shouldn’t have to go this far in hardware reverse-engineering to just ask a simple question of what Flash SanDisk used in their high-endurance card. You’d think they would be proud to say they use 3D NAND for higher endurance and reliability, but I guess not!

Downloads

For those that are interested, I’ve included the logic captures of the card’s activity shortly after power-up. I’ve also included the (very crude) Arduino sketch that I used to read the NAND ID and Parameter Page data manually:

Ramble: 2019 in review

UPDATE (May 20, 2020): Nah, 2020 is busted.

As the year of 2019 (and the decade we like to call the 2010s) draws to a close, it’s amazing how fast time has gone by. I first started this blog in September of 2012, mainly as a “brain dump” of whatever electronics stuff I wanted to talk about – and at its heart, it still is… and I have no intent to change that.

Pretty Popular Posts

Fresh off the (Word)Press

Some of this year’s posts have gotten a good handful of views. The top 5 of my posts each managed to snag at least a couple thousand views:

Post                                           Views
Resurrecting a dead MacBook Pro                5,416
Adding external PCI Express to the Atomic Pi   3,081
Running Doom on a Magellan GPS receiver        2,819
Damaged eMMC data recovery                     2,374
SIM card PIN recovery with a logic analyzer    2,229

The fifth in the list is different in that none of its views came from Hackaday (they haven’t responded to the e-mail tip I sent to them on December 21st); rather, the views came from Twitter when a security researcher found my blog post after I put it on Reddit. Hopefully Hackaday gets around to featuring that post in the near future.

Old Classics

Some of my previous posts just can’t be put down by readers, with some posts proving to be unexpectedly popular:

Post                                           Views
Building my own eMMC-based SD card             5,474
Running Doom on a Keysight oscilloscope        6,399
KitchenAid induction cooktop service manual    3,541
Kentli PH5 Li-ion AA battery teardown          2,716
Kentli PH5 Li-ion AA battery review            2,566

What’s interesting is how many of these older posts managed to outclass this year’s big hits – one that I found surprising was the KitchenAid service manual! Looks like their induction cooktops (still) have widespread design problems that cause their power transistors to blow up. Additionally, the Kentli PH5 Li-ion AA batteries still manage to pique people’s curiosity, even five years after I first published that post!

Views? Views.

This year wasn’t as popular as previous years, with this one ranking 4th out of the 7 years I’ve run this blog. This year garnered over 116,000 views by over 56,450 visitors, with the most views coming from the United States (32,600), the United Kingdom (6,500), Canada (6,200), Germany (5,850), and Russia (3,390). However, if I discount last year’s 15,000 views that came from Hacker News (I wasn’t able to get any significant attention on that site this year), then this puts me a fair bit ahead of last year, which would have otherwise had only 110,000 views.

Regardless, this satisfies my (personal) goal of at least 100,000 views per year, and I’m still glad that my blog still gets people’s attention.

WordAd¢ (I can’t call it WordAd$ this year)

This year’s ad revenue has been pretty paltry compared to last year. My previous update that tallied revenue from January to September 2019 revealed that my ad rates have fallen by almost two-thirds compared to last year. This year served up approximately 552,000 ads and yielded $130 USD, with an average of 24 cents per 1,000 ad views (aka CPM). If I used last year’s average CPM figure of 57 cents, I would have received almost $315 USD! Hopefully 2020 turns out better, but I’m not feeling too optimistic about it.

Looking Forward

This year represents some significant changes to my personal and professional life. I’ve finished my time in post-secondary, and have graduated from a network-specialized information technology (IT) program at my local college; I then took some time off to meet with new and old friends at smaller get-togethers and larger conventions. This leads me to the next phase in my life – get some stable full-time work in the real world (and hopefully still have time to do fun electronics projects that I can share with you on this blog).

Buckle up – it’s going to be one heck of a ride into the new decade! With all that said…

Happy New Year! Thanks to everyone who views and shares my work – you make all of this worthwhile! –Jason


Recovering the SIM card PIN from the ZTE WF721 cellular home phone

As seen on Hackaday!

TL;DR – If you have a ZTE WF721 that’s PIN-locked your SIM card, try 2376.

Recently I picked up a used Samsung Galaxy Core LTE smartphone from a relative after they upgraded to an iPhone. As the Core LTE is a low-end smartphone, I suspected that the phone was SIM locked to its original carrier (Virgin Mobile), but in order to test this I needed a different SIM card. My personal phone was on the same carrier, so I wasn’t able to use that to test it.

However, the previous summer I picked up a ZTE WF721 cellular home phone base station (that is, it’s a voice-only cell phone that a landline phone plugs into), which came with a Telus SIM card. The issue is that the WF721 sets a SIM card PIN to essentially “lock” the card to the base station, and it wasn’t the default 1234 PIN; brute-forcing a SIM card is not possible as you get 3-5 attempts before the card needs to be unblocked using a PUK (PIN Unblock Key), failing that, the card is permanently rendered unusable. I decided to take the base station apart, and use my knowledge in electronics and previous research into smart cards to see if I could recover the PIN.

(Yes, I went through all this work instead of just buying a prepaid SIM card from the dollar store. I’m weird like that.)

Test Pads & Signals

After a bit of disassembly work involving removing screws hidden under rubber non-slip feet and a lot of spudgering open plastic clips, I got access to the four test pads that connect to the SIM card, accessible on the opposite side of the PCB from the SIM card socket.

ZTE WF721 Opened

The ZTE WF721 opened, with test pads broken out and connected to DSLogic for reverse engineering.

An ISO 7816-compliant smart card (and a SIM card is one) requires 5 different lines to work: Vcc (power), ground, clock, I/O (data), and reset. The I/O is an asynchronous half-duplex UART-type interface whose baud rate is determined by the card’s characteristics and the clock frequency it is given by the reader (in this case, the WF721). The details of how the interface works can be obtained for free from the ETSI (European Telecommunications Standards Institute) in their TS 102 221 specification.

ZTE WF721 SIM Card Test Pads

The test pads that connect to the WF721’s SIM card socket.

I then soldered the ground wire to a free test pad elsewhere on the board, whereas the four other wires were soldered to the test pads near the SIM card socket. I then connected these wires to a pin header and plugged it into my DSLogic Plus logic analyzer. I analyzed the logic captures after turning the WF721 on and allowing it to initialize the SIM card and attempt to connect to the cellular network (the service to it has been disconnected so it doesn’t actually succeed).

Command Analysis

After looking at the raw logic capture, there was a lot to sift through. I needed to create a custom setting for the UART decoder, as the serial output isn’t your traditional “9600-8-N-1” setup; rather, the interface uses 8 data bits, even parity, and 2 stop bits. The baud rate is determined by a parameter in the card’s initial identification, the ATR (Answer to Reset). By running the card’s ATR (which I had previously captured on the PC with the SpringCard PC/SC Diagnostic tool) through Ludovic Rousseau’s online parser, I determined I needed to use a baud rate of 250 kbit/s, since the card was being fed a 4 MHz clock.
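
The underlying math comes straight from ISO 7816-3: the bit rate equals the clock frequency times D/F, where F and D are encoded in the ATR’s TA1 byte. Here’s a Python sketch; note that TA1 = 0x96 is my assumption for illustration (one encoding that yields F = 512 and D = 32), as I haven’t reproduced the card’s full ATR here:

# ISO 7816-3 bit rate: baud = clock * Di / Fi, with Fi/Di indexed by the ATR's TA1 byte.

FI_TABLE = {0x0: 372, 0x1: 372, 0x2: 558, 0x3: 744, 0x4: 1116, 0x5: 1488,
            0x6: 1860, 0x9: 512, 0xA: 768, 0xB: 1024, 0xC: 1536, 0xD: 2048}
DI_TABLE = {0x1: 1, 0x2: 2, 0x3: 4, 0x4: 8, 0x5: 16, 0x6: 32, 0x7: 64,
            0x8: 12, 0x9: 20}

def smartcard_baud(ta1: int, clock_hz: int) -> float:
    fi = FI_TABLE[ta1 >> 4]    # high nibble selects F (clock rate conversion factor)
    di = DI_TABLE[ta1 & 0x0F]  # low nibble selects D (baud rate adjustment factor)
    return clock_hz * di / fi

print(smartcard_baud(0x96, 4_000_000))  # 250000.0 – matching the observed rate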

T=0 Smart Card Command (APDU) Structure

The smart card communicates to and from the host through APDUs (Application Protocol Data Units). The command header for a T=0 smart card (character-based I/O, which most cards use) is made up of 5 bytes: class, instruction, 2 parameter bytes and a length/3rd parameter byte. To acknowledge the command, the card sends the instruction byte back to the reader and the data is transferred to/from the card, depending on the command used. The card then sends two status bytes that indicate whether the command is successful; if it is, the response is 0x9000. A graphical representation of this process can be seen in the next section.
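
In code form, the header and status-word handling look something like this minimal Python sketch (field names follow ISO 7816-4; this mirrors the description above rather than any particular library’s API):

from dataclasses import dataclass

# A T=0 command APDU header: the 5 bytes sent to the card to start a command.
@dataclass
class CommandHeader:
    cla: int  # class byte
    ins: int  # instruction byte (echoed back by the card as the acknowledgement)
    p1: int   # parameter byte 1
    p2: int   # parameter byte 2
    p3: int   # length (Lc/Le) or a third parameter

    def to_bytes(self) -> bytes:
        return bytes([self.cla, self.ins, self.p1, self.p2, self.p3])

def command_succeeded(sw1: int, sw2: int) -> bool:
    return (sw1, sw2) == (0x90, 0x00)  # status word 0x9000 = success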

VERIFY PIN Command Decoding

The raw data structure of a SIM card’s VERIFY PIN command. Each part of the flow is labeled for ease of understanding (click image to see full size).

The command I’m looking for is 0x20 (VERIFY PIN), and I had to sift through the command flow in the logic analyzer until I found it. After a lot of preceding commands, I found the command I was looking for, and I found the PIN… and it’s in plaintext! As it turns out, it is sent as an ASCII string, but it’s not null terminated like a regular string. Instead, the data is always 8 bytes (allowing up to an 8-digit PIN), but a PIN shorter than 8 digits will have the end bytes padded with 0xFF (all binary ones). It was easy to determine that the bytes 0x32 33 37 36 is the ASCII representation of the PIN 2376, and after the card waited many tens of milliseconds, it acknowledged the PIN was correct as it gave the expected 0x9000 response code.
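
Reconstructing that exchange in Python (class byte 0xA0 is the legacy GSM class, INS 0x20 is VERIFY PIN, and P2 = 0x01 selects PIN1/CHV1):

# Building the VERIFY PIN command observed in the capture...
pin_field = "2376".encode("ascii").ljust(8, b"\xff")  # ASCII PIN, 0xFF-padded to 8 bytes
apdu = bytes([0xA0, 0x20, 0x00, 0x01, 0x08]) + pin_field
print(apdu.hex(" "))  # a0 20 00 01 08 32 33 37 36 ff ff ff ff

# ...and recovering the PIN from the sniffed data bytes:
sniffed = bytes.fromhex("32333736ffffffff")
print(sniffed.rstrip(b"\xff").decode("ascii"))  # 2376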

PIN Testing & Unlocking

SIM Opened in Dekart SIM Manager

Dekart SIM Manager showing the phone number programmed into the SIM card (censored for privacy).

I tried the PIN in the Dekart SIM Manager software on my computer, and it worked! I was able to read out the contents of the SIM and find out what phone number it used to have, although no other useful information was found.

By using the legacy GSM class 0xA0, I was able to manually verify the PIN by directly communicating with the SIM card using the same command syntax in PC/SC Diagnostic:

SIM Card VERIFY PIN Test

Testing the VERIFY PIN command directly in SpringCard PC/SC Diagnostic.

I took the SIM card out and put it in my Galaxy Core LTE phone, entered the PIN, and as expected it brought up the network unlock prompt. I was able to contact my carrier to get the phone unlocked, and they did it for free (as legally required in Canada) – it turned out to be helpful I was on the same network, as they needed an account to authenticate the request against, even if the phone is registered to another account holder. After entering the 8-digit unlock PIN they provided, the phone was successfully unlocked!

The WF721 is in all likelihood also network locked, but that’s a bridge I haven’t crossed yet.

Conclusion

After a bit of sleuthing into how SIM cards communicate with a cell phone, I was able to decipher the exact command used to authenticate a SIM card PIN inside a disused cellular home phone, all to check if a hand-me-down smartphone was network-locked to its original carrier. Was it a lot of effort just to do that? Maybe, but where’s the fun in buying a prepaid SIM card? 🙂


Quick Update: Jumping Back On the (Free)wagon

After trying out the WordPress Personal plan earlier this year, I was curious as to whether upgrading to a paid plan would result in improved earnings when using the WordAds revenue program.

It doesn’t. If you’re making money on the Free plan, stick to it.

Considering it’s 4:10 AM at the time of this blog post, I should be already in bed but I was up doing some repair work. I got an email saying my subscription for my blog was expiring and I instinctively paid for it, but realized it was for the WordPress Personal plan I was planning to ditch once it lapsed. Immediately after paying I cancelled the subscription, thus ending the plan earlier than I intended. At least the refund process was quick and easy.

WordPress Personal Plan Cancelled

Ever complete a purchase and immediately think “Wait, this isn’t what I intended to buy”? That was me just now.

WordAds Adventures, Episode 5: 2019 in review (January to September)

Time sure flies by – it’s already been almost two years since I first joined the WordAds program, and over a year since my last WordAds update. Let’s see how we’ve done…

Results from July 2018 to September 2019

Earnings Data

Payout   Period     Revenue   Ads      Views    Visitors   CPM      Ads/View   Ads/Visitor
1        Jul 2018   $8.19     18,581   6,861    3,191      $0.441   2.708      5.823
         Aug 2018   $11.64    21,633   8,103    3,931      $0.538   2.670      5.503
         Sep 2018   $10.66    21,082   8,251    3,721      $0.506   2.555      5.666
         Oct 2018   $16.66    34,896   11,976   6,044      $0.477   2.914      5.774
         Nov 2018   $26.20    40,856   15,520   6,709      $0.641   2.632      6.090
         Dec 2018   $23.31    38,538   11,786   6,119      $0.605   3.270      6.298
2        Jan 2019   $12.71    40,856   13,028   5,791      $0.311   3.136      7.055
         Feb 2019   $9.91     42,626   11,334   5,773      $0.232   3.761      7.384
         Mar 2019   $7.45     32,798   9,230    4,427      $0.227   3.553      7.409
         Apr 2019   $4.20     20,456   6,320    2,870      $0.205   3.237      7.128
         May 2019   $5.79     26,557   6,414    2,910      $0.218   4.140      9.126
         Jun 2019   $14.36    48,428   9,821    3,977      $0.297   4.931      12.177
         Jul 2019   $21.19    87,689   16,222   8,819      $0.242   5.406      9.943
         Aug 2019   $10.17    44,227   8,376    4,083      $0.230   5.280      10.832
         Sep 2019   $11.78    54,143   10,155   5,264      $0.218   5.332      10.286

July 2018 to December 2018

Since the last update, things were actually going pretty nicely until the end of 2018. My CPM rose from the 40-cent mark up to 64 cents in November 2018. My last payout came to me after the end of the year, netting $194.42 for 2018 overall, and $96.66 for this specific time period. The CPM (earnings per 1,000 ad views) averaged 55 cents for the entire year, and 53.5 cents for this specific time period.

January 2019 to September 2019

After ending 2018 on a high note, I had good feelings about how well 2019 was going to pay out. I was wrong – it’s worse than it’s ever been before. My CPM has fallen to less than 30 cents, even despite a few popular posts like adding PCIe to the Atomic Pi, or fixing a liquid-damaged MacBook Pro.

This year to date I’ve earned $97.56, with a total ad view count of 397,780. If I multiplied this by last year’s average CPM, I should have earned $218.78 – talk about a massive cutback! This is coupled with the fact that the majority of the ads that do get served are the low-quality chumbox kind, like the infamous “gut doctor” ad that would repeat itself over a dozen times within a few pages’ worth of blog posts.

At the same time, I guess I shouldn’t complain too much about my earnings. Another WordPress blogger earned 18 cents in March 2019. Even if I’m not breaking even on my hosting fees, it still helps.

Conclusion

Simply put, 2019 has not been a particularly prosperous year for WordAds revenue – or at least in my case it isn’t. Hopefully the rest of 2019 turns out better, and likewise for 2020…


Hacking into Windows CE (and Doom) on the Magellan RoadMate 1412 GPS receiver

As seen on Hackaday!

Oh no, I’ve done it again haven’t I?

TL;DR: Just because a device runs Windows CE doesn’t mean it’ll just run Doom without much work.

About a week ago I was perusing some local garage sales, and stumbled upon an old GPS receiver – the Magellan RoadMate 1412. Magellan units are known to run Windows CE, so naturally I had to purchase it and do what needed to be done: run Doom on it. After shelling out $10 CAD for the receiver and its slightly-damaged car charging cord (no mounting bracket, though), it was time to take it home, clean it up, and send it to its Doom… (heh, get it? … I’ll see myself out.)

The Magellan RoadMate 1412 running Doom! It wasn’t nearly as easy as I first thought…

Considering that the device runs Windows CE 5.0, I figured that running Doom on it would be a piece of cake once I gain access to the underlying operating system. Should be pretty straightforward… right?

Wrong! Feel free to join me in my adventures in running Doom on a (perhaps excessively) stripped-down Windows CE device.

Stage 1: Power-Up

Turning on the unit wasn’t particularly exciting, although I expected the unit to not work until the internal battery received at least some charge. However, within seconds of me connecting the car charger to it, it powered on with the Magellan logo and a progress bar, before transitioning to the boot logo and the usual “don’t crash your car” warnings that GPS units tend to display on power-up.


Unsurprisingly, the unit still works as a GPS unit, and there’s no immediately apparent way to exit the navigation app and slip out into Windows CE itself. However, after some searching, I found out that plugging in the unit causes it to appear on a computer as a USB drive, with the navigation app’s files all visible as a FAT32-formatted 2-gigabyte volume.


Stage 2: Not-so-Total Command

The navigation app’s files are stored in the “APP” directory, with the executables “Navigator.exe” and “mgnShell.exe” catching my eye in particular. I renamed the Navigator app to “MagNavigator.exe” to allow a relatively easy revert in case things went wrong, and copied TotalCommander/CE in place of the original navigation app. Rebooting the receiver didn’t appear to change anything until I dismissed the initial legal warning. Once I closed the notice, I was sent right into TotalCommander/CE, but with no taskbar in sight.


One strange thing I noticed was a distinct lack of icons in the folder list. Scrolling through the “\Windows\” directory revealed some rather disappointing revelations: we have no command prompt, on-screen keyboard, or Explorer shell (therefore, no desktop, Start menu or taskbar).

This meant that running Doom wouldn’t be nearly as easy as it was when I got it running on my Keysight DSOX1102G oscilloscope, as that device still had a (mostly) full Explorer shell and command prompt. Windows .lnk shortcuts failed to work either, causing TotalCommander to just redirect to the system root directory – and forget trying to use batch files! The Explorer components are so absent, third-party apps can’t even display Open or Save dialogs…

This presented a significant roadblock: Chocolate Doom for Windows CE requires command-line arguments. How am I supposed to run this with no Explorer shell or command prompt?!

Stage 3: Getting (Mort)Scripty

Without access to native Windows CE tools, I needed to find a third-party solution to run programs with command-line arguments – maybe some sort of scripting engine.

Enter MortScript. It’s a lightweight yet powerful scripting engine, with a Visual Basic-like syntax. It’s instrumental in making GPS mods like MioPocket possible. (On that note, before this point I wasn’t even aware that MioPocket even existed; I learned that it provides a lot of Windows CE functionality that’s otherwise absent on GPS units like mine, but I’ve come this far – I was determined to forge my own path to Doom.)


I tinkered with the example command syntax and used its Run() command to send the correct command-line arguments for Chocolate Doom: “chocolate-doom.exe -iwad [wad path]“. After running MortScript for the first time to register the .mscr file extension, I was ready to make my script:

# LaunchDoom1.mscr
#########################################
# Chocolate Doom for Windows CE Startup Script by Jason Gin
# Visit: https://ripitapart.com
# Version 1.0 (initial release)
#########################################

# Resource Paths (WAD and executable)
DoomWadPath = "\SDMMC\Fun\chocolate-doom-1.3.0-wince\Doom1.wad"
DoomExePath = "\SDMMC\Fun\chocolate-doom-1.3.0-wince\chocolate-doom.exe"

#########################################

# Step 1: Create required command-line arguments to start Doom:
DoomExeArgs = "-iwad " & DoomWadPath

# Step 2: Run Doom!
# Error checking is performed on whether the WAD file path is valid.
# This assumes that the executable path is valid.
# If it isn't, then nothing happens anyway.

If (FileExists(DoomWadPath))
  Run(DoomExePath, DoomExeArgs)
Else
  Message("Error: Unable to find the WAD file at " & DoomWadPath & "!")
EndIf

Stage 4: Doom’d!

After running the script, I declared success: it runs Doom! The window was limited to its lowest supported resolution of 256×200, but running “chocolate-setup.exe” I was able to set it to 320×240 – not great, not terrible. Oh, and it also runs Duke Nukem 3D.


Unlike my previous Doom hack, I wanted to make the window fill the screen but not be in full-screen mode (without any physical buttons, I would otherwise be unable to regain control of the operating system). After some tinkering with the configuration files stored in “\My Documents\.chocolate-doom\chocolate-doom.cfg“, I changed the following settings:

autoadjust_video_settings     0
fullscreen                    0
aspect_ratio_correct          1
screen_width                  480
screen_height                 247

With this adjustment saved, I was able to get a more immersive Doom experience on my GPS receiver.

The Magellan RoadMate 1412 running Doom.

Extras

I wanted to see if I could get an external keyboard working on the receiver so I can actually play Doom instead of just letting the on-screen demo do its thing, so I rigged up a mini-USB On-The-Go (USB OTG) cord with a homemade USB B-to-A adapter so I can plug in USB peripherals to the charging port. Unfortunately, it seems that the receiver only supports USB drives, and refuses to work with USB input devices (like mice and keyboards); it doesn’t even work with USB hubs! Oh well, it was worth a try.


Conclusion

As with many things in life, projects are often much more complicated than they might seem. Just because a certain device runs Windows CE doesn’t mean that all of the expected components are actually available (after all, it is meant to be a highly modular embedded operating system). However, with the right tools, it can be done – and it keeps the adage of “It Runs Doom!” alive.

Unboxing and review of SanDisk 64GB microSDXC High Endurance Card

UPDATE (July 19, 2020): I’ve analyzed a 128GB version of the High Endurance card, and it appears that SanDisk is using 3D TLC Flash.

Dashcams: they can be a crucial tool when reconstructing events in a vehicular incident, or a source of entertainment when watching compilations on YouTube. Like any modern device, they generally use SD or microSD cards as their storage medium. However, not all cards are created equal.

Cheaper cards, like SanDisk’s Ultra lineup, use cheaper TLC (triple-level cell) NAND Flash that is ill-suited to the harsh working conditions of a dashcam. Not only does the card have to endure temperature extremes, the constant writes can burn through the Flash’s write cycles in short order. In fact, SanDisk specifically denounces this line of cards for use in continuous-recording applications.

The solution: high-endurance memory cards! These cards (at least in theory) use more durable MLC or even SLC NAND Flash, which can take many more write cycles. I purchased the 64-gigabyte model, the SDSQQNR-064G-G46A.

Unboxing

The card’s packaging isn’t much different than SanDisk’s typical microSD card offerings. The paper-and-plastic package includes a small blister pack that holds the microSD card itself and the full-size SD card adapter, without a carrying case (granted, the memory card is expected to stay inside the dashcam for most of its working life).

The packaging also includes a license key for a 1-year subscription to the RescuePRO data recovery software (although in all honesty, you’d be better off using the free PhotoRec software instead).

Endurance Rating

SanDisk’s lineup of high-endurance memory cards are designed for use in very write-intensive workloads, such as constant video recording.

Unfortunately, the endurance specifications for these cards are (probably intentionally) vague, only providing a set number of hours of video recording. However, we can infer a rating with a little bit of math.

SanDisk’s card packaging defines Full HD video to be 26 Mbps, which is equivalent to 3.25 (binary) megabytes per second. This equates to 11,700 megabytes per hour, or 11.426 gigabytes per hour. With a rating of 5,000 hours at this data rate, we get a specified endurance of 57,128.91 gigabytes written, or 55.79 terabytes written (TBW).

Memory cards, like other block-based storage media, often define capacities with decimal prefixes, whereas computers usually use binary prefixes. A “64-gigabyte” card is really 59.605 binary gigabytes (“gibibytes”) in capacity, but in this blog post I’m using the Windows notation of gigabytes; that is, calculating in binary but displaying as decimal. 😛

Therefore, we get a final calculated P/E (program-erase) cycle count of… 936 cycles. This is more in line with traditional 2D TLC NAND Flash, so I suspect that this rating is either based on different bitrates, or SanDisk is being really, really conservative in their estimates – or heck, maybe this really is just TLC NAND Flash that’s being configured and/or warrantied differently by SanDisk. As much as I am tempted to remove the epoxy coating that covers the manufacturing test pads in order to get a NAND Flash signature directly, I like having a warranty for at least a few years. Maybe I’ll buy another card to try this on…
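If you want to double-check the math, here’s a quick Python sketch that reproduces the arithmetic above, using the same calculate-in-binary, display-as-decimal convention as the rest of this post:

# Endurance estimate from the "5,000 hours of Full HD video" rating
video_rate = 26 / 8                          # 26 Mbps -> 3.25 MB/s
total_gb = video_rate * 3600 * 5000 / 1024   # 57,128.91 GB written over the rated life
total_tb = total_gb / 1024                   # 55.79 TBW

# "64 GB" capacity, calculated in binary but displayed as decimal
capacity_gb = 64e9 / 1024**2 / 1000          # ~61.04 "GB"
print(f"{total_tb:.2f} TBW, {total_gb / capacity_gb:.0f} P/E cycles")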

UPDATE (July 19, 2020): I got my paws on another card (the 128GB variant), and did some reverse-engineering work on it to determine what Flash SanDisk used. It turns out they used 3D TLC NAND, which is still quite durable and reliable due to the use of larger process geometries. The Flash should easily withstand thousands of write cycles without much issue.

Card Information

Using an older laptop with a true SD-compliant slot (most newer ones are just USB card readers internally), I was able to grab the card’s metadata from Linux. These information files are found in /sys/block/mmcblkX/device, where X depends on your host machine (usually 0). Android used to be able to do this as well, but nowadays it’s not possible without a rooted operating system.
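If you want to pull the same metadata from your own card, a minimal sketch looks like this (assuming the card shows up as mmcblk0; which attributes are present varies by kernel version and card type):

from pathlib import Path

device = Path("/sys/block/mmcblk0/device")   # adjust mmcblk0 to match your system
for attr in ("cid", "csd", "manfid", "oemid", "name", "date", "fwrev", "hwrev"):
    node = device / attr
    if node.exists():
        print(f"{attr}: {node.read_text().strip()}")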

Item Value
CID (Card ID) 035744534836344780ed1bbb9e013100
CSD (Card Specific Data) 400e0032db790001dbd37f800a404000
Manufacturer ID 0x03 (SanDisk)
Manufacture Date January 2019
Device Name SH64G
Firmware Version 0x0
Hardware Revision 0x8

Initial Formatting

The card is formatted as exFAT, with a 16 MB offset (that is, the first 16 MB of the card is unallocated), with an allocation unit size of 128 kilobytes. It uses a very basic MBR (Master Boot Record) partition structure, with the first sector being the bare minimum to be recognized as a valid structure.
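For the curious, the partition offset is easy to verify by parsing the card’s first sector yourself; here’s a rough sketch (/dev/sdX is a placeholder for wherever your reader presents the card, and reading it directly requires root):

import struct

with open("/dev/sdX", "rb") as disk:     # placeholder device node; requires root
    mbr = disk.read(512)

assert mbr[510:512] == b"\x55\xaa"       # MBR boot signature
entry = mbr[446:462]                     # first of the four 16-byte partition entries
start_lba = struct.unpack_from("<I", entry, 8)[0]
print(f"Partition type 0x{entry[4]:02x} starting at {start_lba * 512 // 2**20} MB")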

Performance

Now that I’ve probably irked some of my readers with my usage of decimal and binary prefixes, it’s time to see how fast this card can go. SanDisk’s own ratings for the card are very brief, citing a sequential read/write speed of 100 and 40 MB/s respectively. It is rated for the V30 Video speed classification, which guarantees a minimum of 30 MB/s sequential writes continuously.

All the tests below were performed on my desktop computer using an FCR-HS4 USB 3.0 reader from Kingston, which is based on the Realtek RTS5321 chipset.

CrystalDiskMark

CrystalDiskMark is the de facto standard for storage benchmarks. I’m using the 64-bit edition of CDM, version 5.2.0.

I/O Type Read Write
Sequential QD32 91.80 MB/s 60.56 MB/s
Sequential 93.33 MB/s 61.66 MB/s
4K Random QD32 8.319 MB/s (2129.7 IOPS) 4.004 MB/s (1025.0 IOPS)
4K Random 8.121 MB/s (2079.0 IOPS) 3.971 MB/s (1016.6 IOPS)

The sequential I/O speeds are on par with a modern microSDXC card, and the IOPS aren’t too shabby either; they exceed the requirements for the A1 performance class, which calls for read/write IOPS of 1,500 and 500 respectively. This could make this type of card a viable option for other write-heavy environments – including single-board computers (SBCs) like the Raspberry Pi, where memory card failures due to excessive writes are common.

ATTO Disk Benchmark

The card’s read/write performance levels off at around the 64-kilobyte mark during testing, showing that operations smaller than this incur a significant performance penalty. This may also be indicative of the internal page and block sizes of the NAND Flash itself.
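This behaviour can be loosely reproduced by timing reads at increasing block sizes; here’s a rough sketch (the device path is a placeholder, and the OS page cache will skew results unless you drop caches or use direct I/O between runs):

import os, time

fd = os.open("/dev/sdX", os.O_RDONLY)    # placeholder; a large file on the card works too
for block_size in (4096, 16384, 65536, 262144, 1048576):
    total = 0
    start = time.perf_counter()
    while total < 64 * 2**20:            # read 64 MiB at each block size
        total += len(os.pread(fd, block_size, total))
    elapsed = time.perf_counter() - start
    print(f"{block_size // 1024:5d} KiB blocks: {total / 2**20 / elapsed:6.1f} MB/s")
os.close(fd)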

Hard Disk Sentinel

Hard Disk Sentinel comes with a bunch of disk benchmarking tools, including some to test the entire “surface” of a drive. I used the software’s Surface Test tool to measure the card’s performance before and after filling the drive with data – first with random data, then with all zeroes.

Random Seek Test

The Random Seek Test measures the card’s latency when performing random “seeks”, although more accurately it reads a single sector from a random location.

State Average Latency Minimum Latency Maximum Latency
Empty/Initial (0x00) 360 µs 350 µs 420 µs
Random Fill 600 µs 590 µs 670 µs
Zero Fill 600 µs 590 µs 690 µs

The card initially had about 420 microseconds of latency, but after filling the card with random data, this increased to 670 microseconds. Filling the card with all zeroes afterwards did not improve performance, and this isn’t helped by the fact that SD cards generally lack the ability to “TRIM” unused sectors the way SSDs or eMMC chips can.
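A crude approximation of this kind of seek test takes only a few lines (again, /dev/sdX is a placeholder, and the numbers will include host-side overhead):

import os, random, time

fd = os.open("/dev/sdX", os.O_RDONLY)    # placeholder device node; requires root
sectors = os.lseek(fd, 0, os.SEEK_END) // 512
latencies = []
for _ in range(1000):
    offset = random.randrange(sectors) * 512
    start = time.perf_counter()
    os.pread(fd, 512, offset)            # read one sector from a random location
    latencies.append(time.perf_counter() - start)
avg = sum(latencies) / len(latencies)
print(f"avg {avg * 1e6:.0f} µs, min {min(latencies) * 1e6:.0f} µs, "
      f"max {max(latencies) * 1e6:.0f} µs")
os.close(fd)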

Full Surface Read (or at least an attempt)

This is where things get a bit… interesting. It was around this time that I noticed some performance inconsistencies that didn’t show up in other benchmarks. Although the I/O speeds largely matched what my other benchmarks revealed, I noticed frequent dips below normal, often down to the mid-20 MB/s range! I wasn’t sure whether this was necessarily the card’s fault (pauses in reads or writes can degrade performance on a device that can’t buffer them well enough), or whether my card reader, operating system, or something else was responsible.

I decided to hold off on publishing the sequential write test until I get this issue figured out – perhaps it’s worthy of a blog post all on its own…

Resurrecting a dead MacBook Pro (mid-2012 13-inch, model A1278)

As seen on Hackaday!

A couple weeks ago, I picked up a dead MacBook Pro that was on its way to the recycle bin, and was curious as to whether I would be able to fix it. It had a note attached to it citing several issues with the computer: the display doesn’t work, the battery doesn’t charge, one of the USB ports doesn’t work, and it won’t load an operating system. It certainly didn’t look particularly promising, but I felt it would be a good way to test my skills in component-level repair – with a pretty nice prize if I succeeded.

Triage

The computer I picked up is a mid-2012 MacBook Pro by Apple; it is the A1278 model with a logic board number of 820-3115-B, and it comes with an i7-3520M CPU and 8 GB of DDR3 RAM – however, the hard drive had been removed by the time I received it. As previously noted, the computer had a laundry list of issues that were certainly the reason the original owner decided to discard it – a laptop that neither boots nor shows a picture isn’t a particularly useful one.

Connecting a MagSafe AC adapter to the computer revealed even more issues: even though the note already said the unit wouldn’t charge, there was no LED indicator on the power adapter’s plug, and the computer wouldn’t power on even with external power connected; the only sign of life was one of the battery’s LED level indicators rapidly flashing when I pressed its button. With this functionality test being unsuccessful, I decided to open up the computer to see what else was wrong…

Troubleshooting & Diagnosis

Unscrewing the bottom cover revealed what horrors the computer had experienced. There was clear evidence that it had suffered from liquid damage: rampant corrosion around the LCD connector and some of the power circuitry, and some of the corrosion deposits were even left on the computer’s bottom cover! If you watch Louis Rossmann’s videos, you’ll know that liquid damage is rarely an easy fix, especially when high-voltage LED backlight circuitry gets involved.

Liberal use of a 70% isopropyl alcohol solution and a brush was able to scrub away all the corrosion on the computer’s logic board, and the results were not pretty:

Many PCB test pads were either corroded or entirely gone, the backlight fuse (and its pads) were nowhere to be found, and some ICs were missing entire pins! Whatever was spilled on this area of the MacBook certainly had some corrosive properties to it, and it looks like nothing was done to stop the initial damage. With a schematic and board-view software in hand, it was time to investigate what particular components had suffered damage.

Power Supply

Before any device can perform any useful functions, it needs power. I reconnected the AC adapter and started to check the voltages around the DC input jack and its surrounding support circuitry. Since I was able to press the MacBook’s battery indicator and get some response when connected to power, I knew that the main input fuse was intact, and that the SMC (System Management Controller) chip was receiving power via the PP3V42_G3H rail and functional; the G3H (G3-Hot) designation means that the power rail is always on, even if the computer is otherwise turned off.

I checked the voltage at the DC jack’s ADAPTER_SENSE line, which is normally at approximately 3.3 volts and uses the 1-Wire protocol to communicate with the power adapter and control the LED on the power adapter’s MagSafe plug. To my surprise, it was at a staggering 16 volts, which meant that something was shorting the DC input voltage (about 16.5 volts) onto a low-voltage communication line – no wonder there wasn’t any LED indicator when I plugged it in! A multimeter measurement found about 2 kOhms of resistance from the power line to the communication line. Thankfully, the MacBook’s logic board features a MAX9940 1-Wire overvoltage protection chip, which is rated to protect against voltages as high as 30 volts.

I scavenged another DC input connector from an older, dead MacBook which shared the same connector and pinout. After connecting this to the logic board, I found that I got a green LED upon connecting the AC adapter, and the CPU fan started spinning; this is a very good sign, as it means the main power circuitry is intact. Measuring the CPU’s Vcore voltage revealed a voltage of about 0.8 volts, which is normal for a modern laptop CPU. With the “heart” of the computer checked out, it was time to focus on the area most affected by the water damage.

LCD & Backlight

Examining the backlight connector and its surrounding circuitry revealed significant damage to many components and the PCB itself. The power supply pins on the LCD connector showed a significant amount of corrosion, and I was concerned that the backlight’s output voltage (up to 52 volts!) could have made its way through all the corrosion residue and damaged critical data lines between the display and the graphics controller. I noticed the backlight fuse (F9700, a 3-amp 0603-size fuse) had gone more than just open-circuit – I couldn’t even find the fuse or its corresponding PCB pads initially! I then probed the LCD connector and found that the display’s 3.3-volt power lines were open-circuit; the corrosion had eaten through the traces between the connector and its decoupling capacitors nearby. Using a diode-mode measurement on the FPD-Link (often called LVDS) lines revealed that the connections were intact; there weren’t any anomalous readings or short-circuits on those lines, the DDC (Display Data Channel) lines, and the 3.3-volt power lines.

Due to the high voltages used to drive LED backlights, I had my suspicions on U9701 (a Texas Instruments LP8550 LED backlight driver). It’s a tiny ball-grid array (BGA) package, and attempts to clean the chip from its edges didn’t seem to do much. Its corrosion looked limited at first – only the feedback line’s probe point was lost – but I was sure the chip was on its last legs (or is it balls?).

Power Management

The LCD connector is in close proximity to the computer’s DC input and its “PPBUS_G3Hot” power rail, which is always on (even if the computer is otherwise turned off); the rail’s high voltage exacerbates any corrosion caused by liquid damage. Further examination revealed significant corrosion on the outside of the CPU’s high-side current-sense resistor (R5400), and the current-measurement pins (pins 4 and 5) on U5400 (a Texas Instruments INA213 current-sense amplifier) were completely gone! Clearly there was no way to salvage that component.

There was significant damage to the SMC’s DC input voltage sense circuitry (“VD0R”), with pins 3 and 4 of Q5490 (an ON Semiconductor NTUD3169CZ complementary pair of N-channel and P-channel MOSFETs) being completely eroded away, much like U5400’s current-sense pins; this part of the circuit uses a P-channel MOSFET to switch on a resistive voltage divider, allowing the SMC to measure the voltage on its MagSafe input connector. Additionally, many of the probe points related to that circuit were completely eroded, revealing dark pits instead of silver-plated copper pads.

FireWire

The FireWire circuit wasn’t spared from the carnage, either. Pins 3 and 4 on Q4262 (a Diodes Incorporated BSS8042DW complementary MOSFET pair) were also severely damaged; these pins are used to quickly disable the FireWire power output transistor (Q4260, an ON Semiconductor FDC638P P-channel MOSFET) in case of a “Late-VG event”. This occurs if the ground pins of the FireWire connector are mated too late when plugging in a device – this creates a dangerous overvoltage condition on the FireWire data lines, as up to 30 volts briefly find a return path through the data lines, risking damage to the device and host controller. I wasn’t as concerned with this circuit, as I don’t have any FireWire peripherals; in its current state, the circuit simply means the FireWire port will be unable to disconnect power if a bad cord is plugged in.

Thunderbolt

The area that had the least liquid damage was around C3897, which belongs to U3890 (a Linear Technology LT3957, a 15-volt boost converter for the MacBook’s T29 chip and Thunderbolt interface). All this area needed was a bit of corrosion cleanup.

USB Port

During the functionality tests, I noticed the metal casings of the USB ports were getting very hot to the touch, and I nearly burned myself on U4600 (a Texas Instruments TPS2561, a dual-channel load switch with internal current limiting)! I found a short-to-ground on a power line on one of the USB ports, which explains the symptom listed on the note. I desoldered the chip, initially thinking the fault was in the chip itself, but the short remained. I narrowed the problem down to C4695, a 10-microfarad ceramic capacitor that had short-circuited internally; this caused the TPS2561 to go into current-limiting mode, effectively turning the chip into a resistor that dissipated copious amounts of heat into the PCB, which made its way to the USB ports (and then my fingers – ouch).

Hard Drive Cable

During the repair process, I was able to install Mac OS X Lion to a SATA SSD, but soon found the MacBook unable to recognize SSDs, despite hard disk drives showing up just fine! As it turns out, the A1278 is notorious for bad HDD cables, with even replacements failing within months of installation. This appeared to be caused by chronic frictional damage, as the cable is sandwiched between the hard drive and the MacBook’s rough aluminum casing – even regular use of the laptop was found to create hairline cracks in the cable. Thankfully replacements are relatively inexpensive, and a little bit of Kapton tape as a barrier against the casing was the “vaccine” against future cable failures.

Repairs

With all of the problems written down, it was time to start fixing up the MacBook. Time to break out the hot air rework station, soldering iron, solder, magnet wire, and plenty of flux!

DC Input Jack

I desoldered the DC input jack, and found there was a lot of corrosion residue bridging the +16.5-volt power line to the ADAPTER_SENSE 1-Wire communication line.

With some isopropyl alcohol and some scrubbing with a small brush, I was able to clean up the corrosion and resolder the jack into place. A quick multimeter test confirmed that the 2 kOhm short from the power line to the data line was gone, and I got an LED indication when I plugged in the AC adapter, including an orange light indicating that the battery is charging.

LCD Connector

I wanted to determine if the display was still functional, so I first focused my attention on the LCD connector, even if I had to eschew the LED backlight for a bit.

I ran a jumper wire from L9004 to pins 2 and 3 of the LCD connector; this belongs to PP3V3_LCDVDD_SW_F, which provides the 3.3-volt power to run the LCD panel except the backlight. After cleaning out the flux and corrosion on the logic board’s connector as well as the LCD cable, I was able to get an image on the display!

USB Port

With the faulty component identified, I replaced C4695 with an identically-rated 10-microfarad 6.3-volt X5R ceramic capacitor in an 0603-sized package. After replacing the capacitor, the USB port was fully functional again!

Current-Sense Amplifier

After ordering both the INA213 and LP8550 from Texas Instruments, it was only a few days before they arrived in the mail. I desoldered the dead chip from the logic board, cleaned up the pads with some flux and desoldering braid, and installed the new chip. Running Apple Service Diagnostic tools showed that the current-sensing circuit was working correctly.

DC Input Voltage Divider Switch

I didn’t want to buy another transistor pair for Q5490, so I replaced the P-channel half with an ON Semiconductor NTK3142P P-channel MOSFET that I salvaged from an older donor MacBook logic board. I scraped away some solder mask on one of the broken traces heading to the SMC’s voltage divider so I could solder the transistor’s drain terminal to it, and used magnet wire to connect the transistor’s gate and source to their corresponding locations across R5491. R5494, leading to PM_SUS_EN, was found to have a 0-ohm resistor that was open-circuit; this was easily bypassed with a wire jumper across the resistor’s original pads. After cleaning off the flux and performing continuity measurements, I measured the voltage at the SMC’s voltage divider resistors and got a valid voltage reading when I plugged in the AC adapter.

LED Backlight Driver

The LP8550 was up next for repair. I took a 2-amp 0603-sized fuse from a dead hard drive, and used some magnet wire to reattach it to the remnants of F9700, which was a 3-amp fuse originally; note that it’s far safer to use a fuse of a smaller rating instead of a larger one, should a circuit fault still exist.

Tracing the other lines to the LP8550 revealed that R9731 (leading to PPBUS_SW_LCDBKLT_PWR) was open-circuit at a via, which was easily bridged with some solder and magnet wire. R9010 (leading to PPBUS_SW_BKL) was open as well.

After reinstalling the fuse, I actually got the backlight working! However, upon a power cycle I heard a snap, saw a puff of smoke, and lost the original backlight chip. Chances are some corrosion residue had indeed allowed 50-odd volts to end up on a more sensitive pin of the LP8550. I used an Xacto knife to lightly scratch an outline around the chip, then used copious amounts of flux and desoldered the dead chip with my hot air rework station; I also removed the fuse to help in further troubleshooting, ensuring that there weren’t any short-circuits to ground on the backlight circuit. I cleaned up the area with leaded solder and some solder wick, and cleaned up the residual flux in anticipation of the new chip’s installation.

The chip was remarkably easy to install – just get the A1 ball lined up according to the board view, and heat the board to the right temperature. After thoroughly cleaning away the flux from the area, I turned on the MacBook… and let there be (back)light! I power-cycled the computer and the LED backlight remained functional! (And for the record, the fuse didn’t even blow during the entire ordeal.)

FireWire Late-VG Protection Circuit

I considered this issue a “WONTFIX”, as I had no use for FireWire connectivity (nor do I have the correct FireWire 800 cables anyway). If I want to sell this computer, I might install a P-channel MOSFET to replace Q4262, in a similar fashion to the DC input voltage-sensing circuitry.

UPDATE (February 26, 2020): Actually, several months ago I decided to finish things once and for all, so I replaced the damaged P-channel MOSFET circuitry. Do I have much use for FireWire? Not really, but it at least means I can experiment with it or something later on, if desired.

Testing

It takes a little bit of Google-Fu, but with the help of a BitTorrent client, I downloaded the disk images to create an Apple Service Diagnostic (ASD) drive. This is far more sophisticated than the built-in diagnostic that runs when you boot the computer while holding down the D key. With ASD, one has the option to use a stripped-down version of Mac OS X – in a similar vein as WinPE – or a very lightweight UEFI (Unified Extensible Firmware Interface) environment that looks very much like Mac OS 9 and earlier.

It took over half an hour, but all the tests passed without a problem, and all the sensor readings were valid. My MacBook Pro has been restored to working order! I installed Mac OS X High Sierra to a 1TB SSD, and used Boot Camp to run Windows 10 Pro as the default operating system (what can I say, I like Windows 🙂 ). The Mac Precision Touchpad driver project makes the touchpad a pleasure to use, as the built-in Boot Camp driver provides a much less comfortable experience.

Conclusion

Much like solving a puzzle, component-level troubleshooting of modern electronics is possible, but it is only feasible if the relevant documentation exists as a good reference point. One can do without it, but reverse engineering isn’t easy if all one has is a non-working device.

With the help of a schematic and board view (including the open-source software OpenBoardView), one can easily find what circuits a component belongs to, and where it goes. By following the connections, one can track down the problem(s) with the board, and hopefully save a device from an untimely end in the landfill or a recycling facility.

Right to Repair

This project is an example of why I believe in the right to repair. If I didn’t have (even unofficial) access to schematics, board views, and diagnostic software, I wouldn’t have been able to bring this dead MacBook Pro back to life. However, with a little bit of electronics troubleshooting knowledge and skill, I was delighted that I diverted a discarded dysfunctional device from a demise in the dumpster. In fact, this blog post was written from the MacBook I just repaired!

Atomic Pi Adventures, Episode 1: Adding external PCI Express expansion by removing onboard Ethernet

As seen on Hackaday!

TL;DR: The Atomic Pi single-board computer CAN be expanded through PCIe. It’s just a massive pain to do so, even if you have steady hands. Let’s just say it’s a long story…

DISCLAIMER: The modification performed in this blog post can, and has, caused permanent hardware damage to my Atomic Pi, albeit repairable with much skill and effort. Reenacting what I’ve done requires significant experience with SMT (surface-mount technology) components, some barely larger than a grain of sand (I consider 0402-size components to be “oversize” in this instance). I accept no responsibility for damages arising from attempting this modification.

Introduction

Single-board computers (SBCs) are all the rage nowadays, with the Raspberry Pi being the most well-known in this category. SBCs are compact computers, carrying their own CPU and memory, and usually some on-board storage and various I/O connections (e.g. USB, HDMI, Ethernet). Most of these computers use the ARM architecture, found on almost all mobile devices today. However, some use the x86 architecture, which is used in higher-end tablets, laptops and desktop computers.

Recently, the Atomic Pi made waves in the electronics hobbyist space, boasting an Intel Atom Z8350 quad-core CPU with 2 GB of RAM, 16GB of eMMC storage, Gigabit Ethernet, Wi-Fi, USB 3.0, built-in speaker amplifiers, and lots of general-purpose I/O (GPIO) pins – all for less than $40 USD!

As one might expect, there were caveats to this little computer, with some dismissing it very harshly, if not unfairly. With some investigative work, members of the community found out that the “Atomic Pi” board actually belonged to the Kuri robot from Mayfield Robotics; the company shut down in late 2018, and the liquidated stock of these SBCs was snatched up by Digital Loggers with the help of a Kickstarter campaign, who then developed breakout boards to make using them easier. These are needed because – unlike almost every other SBC – you can’t just plug in a barrel jack or USB cord to power it! Instead, it uses a 2×13 pin header, which many users did not have on hand, nor had the skill and/or resources to build their own solution. This is compounded by the board’s need for clean, well-regulated 5 volts at 2-4 amps, with 12 volts being optional to power the onboard speaker amplifier. The 16 gigabytes of eMMC storage proved to be too cramped to run Windows 10 directly, and the Realtek RTL8111G Gigabit Ethernet chipset is often frowned upon by those in the pfSense (a free firewall/router OS) community.

The NIC’s usage of the Z8350’s single PCI Express (PCIe) lane is what caught my attention. Unfortunately, the RTL8111G Ethernet chip is soldered to the board, with no easy method to replace it with an external card. A few people have attempted to remove the chip and wire in an external PCIe riser, without success. With my previous experience with fine-pitch electronics work, I figured that this would be a fairly easy modification to make.

It wasn’t. In fact, this was one of the most frustrating electronics projects I’ve done to date – and now you get to come with me in my adventures (and misadventures) in microsoldering on the Atomic Pi (or was it Kuri? The jury seems to be out on the nomenclature).

Optional Reading: PCI Express Signals

PCI Express (or PCIe for short) is a very common high-speed interface for connecting processors or chipsets to all sorts of different peripherals. It uses low-voltage differential signaling to minimize interference, and a single lane of PCIe can carry 250 MB/s for the first generation, all the way up to 2 GB/s for the fourth generation!
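Those per-lane figures fall straight out of the transfer rate and the line encoding; a quick sketch of the math:

# Per-lane PCIe throughput = transfer rate (GT/s) * encoding efficiency / 8 bits per byte
generations = {
    1: (2.5, 8 / 10),      # Gen 1 and 2 use 8b/10b encoding
    2: (5.0, 8 / 10),
    3: (8.0, 128 / 130),   # Gen 3 and 4 use 128b/130b encoding
    4: (16.0, 128 / 130),
}
for gen, (rate_gt, efficiency) in generations.items():
    print(f"Gen {gen}: {rate_gt * efficiency / 8 * 1000:.0f} MB/s per lane")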

A single PCIe lane is made up of three differential pairs: receive, transmit, and a 100 MHz reference clock. Control signals for waking up and resetting peripherals are also provided. Riser cards, often used for cryptocurrency mining, use these five signals to provide the minimum connectivity for any PCIe peripheral. The interface is highly flexible, allowing graphics cards that normally use 16 lanes to run on just one lane. The specification also eases board design: each pair (transmit, receive, clock) is polarity independent – all that matters is that you maintain a good differential pair.

PCIe Signal Pinout

Since the Atomic Pi lacks a PCIe slot of any kind, I have provided the diagram indicating which signals go to what points on the board:

Note that the AC coupling capacitors are required for the peripheral Rx (receive) pair, and are optional for the peripheral Tx (transmit) pair.

Attempt 1: “Chipple” Bypass

I started this project with the intent of making my modifications as reversible as possible. Looking at the RTL8111E’s datasheet (a similar model to the 8111G) revealed the presence of an “ISOLATEB” pin; grounding this pin causes the chip to disable itself and release its hold on the PCI Express lines.

This brings us to the first roadblock: the Ethernet chip is soldered to the board, and the components that connect it are known as 0201 (0.02 inches in length, 0.01 inches in width) – smaller than a breadcrumb!

After disabling the Ethernet chip, I used my trusty 0.1mm (that’s four-thousandths of an inch!) magnet wire to tap into the PCIe lines, and brought them out to a PCIe riser card, which provided a USB 3.0-type socket for an external connection.

Unfortunately, high-speed signals aren’t just about wiring from one device to the next, and this was no exception. The rat’s nest of wires was no suitable medium for the 5-gigabit signals to traverse (PCIe requires very tight control of electrical trace layouts), and the Atomic Pi was unable to detect the presence of any device on the external PCIe port.

Attempt 2: Thin Twinax Troubles

With the signaling issue in mind, I tried some very thin twinaxial cables from a dead MacBook’s hard drive cable to connect the PCIe lines to my riser card. This cable is very thin, with an outer thickness similar to fishing line. It uses a foil and wire “shield” around the two internal wires to protect it from external interference. I figured that this should help protect the delicate signals from the harsh outside world.

I didn’t have very much of this thin twinax cable on hand (that said, if anyone knows where I can buy this stuff in bulk, please let me know!), so I was limited to very short lengths for each pair. I ran out of twinax after the transmit and receive pairs, and resorted to using two coaxial cables from that same cable bundle.

After fiddling with the super-thin wires and soldering them to the header, I still got nowhere and could not get the Atomic Pi to see an external PCIe card.

Removing my modifications revealed my first blunder: despite placing the chip into a temporarily disabled state, the Ethernet chip was dead! This meant that I could not use the built-in Ethernet port anymore, and the LEDs next to the Ethernet port simply glowed a dull orange instead of their rapid blinking pattern when data is being transmitted over Ethernet. I figured that there was no use keeping the chip on the board, so I used my hot air rework station to remove it, using a generous amount of heat and flux to get the chip removed.

Attempt 3: To Ribbons, You Say?

With the dead Ethernet chip removed, I decided on another tactic to bring out the PCIe interface: a thin ribbon cable (okay, more accurately a “flat flexible cable”) to help decouple any mechanical stresses from moving the external USB 3.0 connector around during testing. Its compact size also allowed me to try using the QFN pads to connect my ribbon cable, hopefully minimizing any noise that could be picked up, as well as avoiding any “stubs” of wiring within the differential pairs that would degrade the signals.

The challenge is that the chip uses a very small pad spacing, and I wanted to avoid soldering directly to the ribbon cable. I managed to salvage a couple connectors from some other devices, and used a cut-up CompactFlash memory card’s PCB to make it easier to solder the connector, as well as provide a base for the connector to sit on.

The connector on the board was affixed onto the Ethernet chip’s footprint with the help of some double-sided foam tape, and magnet wire was used to bond the PCIe signal wires onto the ribbon cable. The ribbon itself used copper foil on one side to help with interference suppression, and provide the signals a “ground plane” to travel across.

Things weren’t quite as elegant on the other end of the cable, however. I still had to contend with a fine-pitch ribbon connector, and a way to connect that to a male USB 3.0 plug to hook it up to the PCIe riser. I had little option except to use the small lengths of twinax from the last attempt, and (perhaps unsurprisingly) it didn’t work.

Attempt 4: Teeny Tiny Twists

The next attempt went to a smaller scale, using a very fine twisted pair I created out of my 40 AWG magnet wire (each wire is as thick as a hair). I then shielded this magnet wire by sandwiching it between pieces of copper tape to act as a ground plane for the signals. I reused a dead USB 3.0 drive to act as the plug for the PCIe riser, and wired it straight to the QFN pads of the original Ethernet chip.

Although the twisted pair would, in theory, have reduced intra-pair skew and provided some degree of EMI resistance thanks to the copper tape, the homemade twisted pair almost certainly did not provide the 100-ohm differential impedance required for PCIe signaling. Once again, the attempt to break out PCIe from the Atomic Pi was a failure.

Attempt 5: SASsiness Yields Results…?

Since the central issue with external PCIe connectivity involved the connection between the Atomic Pi board and the riser, I figured I would try a medium that was specifically intended for high-speed differential signals. I bought some thin SATA cables (specifically the 18-inch thin cables on Amazon), which used two pairs of thin SAS cable per SATA connection. The differences between SATA, SAS and PCIe matter little here, as the key criteria were a differential pair with 100-ohm impedance and the ability to carry high-frequency signals.

Despite being a thin cable (each conductor is only 30 AWG, or 0.25 mm in diameter), these cables were too stiff to directly connect to the AC coupling capacitors without a very high risk of damaging the capacitor and/or the pad it is soldered to. I had to reinforce the cables by soldering them down and hot gluing the cables to the board at multiple points to relieve stress on the connections.

The original 0201-sized AC coupling capacitors were removed, and more reasonably-sized 0402 capacitors were used in their place. Each capacitor was a common 100nF capacitance, and were easy to harvest from some dead laptop motherboards I had on hand. I “flipped” the layout of the capacitors away from the QFN footprint, but this meant that only one side of the capacitor was actually anchored to the board; this made the capacitors very fragile and I often lost the terminations on the capacitors (thankfully they are plentiful in electronic devices like this).

To help minimize stress on the vulnerable capacitors, I used magnet wire as a flexible “bridge” between the capacitor and the SAS cable, and the cables were tied down to solder tab wire that I used as “bus bars” for a strong electrical ground as well as a tie-down point to take away most of the strain of the cable’s flexion. Despite negatively affecting signal integrity, I was able to get a sufficiently stable connection to perform some initial testing, and I succeeded! I was able to connect an Intel 82576-based dual-port network card to my external riser.

However, this didn’t last long. The simple movement of the wires on the board when trying to connect different peripherals was enough to break the AC coupling capacitors, despite the use of my flexible terminations from the SAS cable to the capacitors. Replacing the damaged capacitors and redoing the magnet wire terminations failed to restore connectivity, so I desoldered all of the existing connections and started over.

Attempt 6: We Have Lift-Off! (that’s bad)

In an attempt to further improve signal integrity, I decided to take the bold move of eschewing the flexible terminations of magnet wire, and decided to directly solder the SAS cable’s wires to the capacitors, which are soldered to the PCIe pads on the Atomic Pi. I anticipated that physical stresses would damage the capacitors, so I opted to use longer lengths of SAS cable, and to hot-glue the cable to the board at regular intervals to minimize the amount of stress that gets coupled into the cable and capacitors. The 100 MHz reference clock continued to use magnet wire for connection as it was at a much lower frequency than the PCIe signals, and reduced the amount of physical crowding around the PCIe pads.

This leads me to my next issue. To help with structural integrity, I repositioned one of the SAS cables so that it would remain on the board before going to the PCIe riser connector, meaning it would be perpendicular to the other cables. This required me to strip extra foil shielding to jump over the existing capacitors, which increases the risk of physical stress on the capacitors, as well as reduced signal integrity. Additionally, I used a longer set of twinax cables, sacrificing two SATA cables to get most of the original 18 inches of length per cable.

Testing of this construction method failed, and it was only at this point that I realized that two of the four AC coupling capacitors I used weren’t the same capacitance; PCIe usually uses 100 nF capacitors, but I had a 1 nF on one PCIe pair, and 10 nF on the other – no wonder things weren’t communicating (and that’s what I get for assuming the capacitors were the same)! Unfortunately, the process of removing the SAS cable connections resulted in the phenomenon I was trying to avoid: the board’s PERp0 (the positive side of the PCIe receiver) pad had lifted away from the PCB, leaving me with an unsolderable crater.

Attempt 7: Success!

After losing one of the pads for the original coupling capacitors, I was feeling pretty defeated. I didn’t let this stop me, and I decided to apply my magic skills with 40-gauge magnet wire to the trace, and was able to get a sufficient connection with a replacement 100 nF capacitor (and this time I measured it…), hopefully in such a way as to not too negatively affect signal integrity. Unfortunately, the 0201-sized resistors used to connect the PCIe reset and wake signals lost their terminations and no longer took solder on one end; I opted to keep the resistors in place and soldered magnet wire to the other end (facing the SoC rather than the Ethernet chip), as further measurement revealed that the resistors were simply 0-ohm jumpers anyway. I scrapped the longer SAS cables, and went back to the ~8-inch lengths to reduce attenuation.

After rewiring the PCIe data, clock and control lines, I crossed my fingers and retried the riser with my Intel NIC – and it worked! After verifying the connections were sufficiently strong, I decided to make the modification permanent, and covered the area with epoxy to prevent any of the components from breaking loose. Additionally, I used some aluminum sheet metal, double-sided foam tape and some zip ties to create a reinforcement bar, helping to strengthen the cable connections as they leave the board.

With all the connections in place, it was time to get to the fun part: seeing what PCIe peripherals work on the Atomic Pi!

Testing Results

One interesting behaviour of the Atomic Pi when using Windows is that changing PCIe cards often causes the system to immediately power down or freeze during boot. Simply powering the board on again seems to fix this issue. Why am I using Windows? I already upgraded the board to a 64GB eMMC instead of the original 16GB, and the appeal of the Atomic Pi’s x86 architecture was the ability to run desktop apps – in particular, Windows apps.

The UEFI firmware has a hidden advanced menu, accessible if the “shutdown /r /fw” command is run as administrator. There is a section for PCIe settings, and the ones that interested me were the AER (Advanced Error Reporting) and PCIe hot-plug settings. Unfortunately, these do not seem to have any effect as Windows doesn’t seem to pick up any hardware events in the Event Log.

Test 1: Network Devices

PASS: Intel 82576 Dual-Port Adapter

If the port(s) have a connection to another device (e.g. a network switch), the orange and green LEDs will illuminate soon after the PCIe link is properly initialized, which makes for quicker troubleshooting. Even if the system is off, the green link LED remains lit, but goes out once the PCIe bus is reset.

The card works a treat in pfSense and Windows, allowing high-performance Gigabit-speed transfers with both ports in use. However, the UEFI firmware does not recognize the card as a bootable medium (which is for the best in my opinion – the PXE boot program tends to hinder the boot process more than anything).

PASS: Realtek RTL8111GS (External Card)

The PXE network boot program built into the UEFI firmware works just fine, which is expected as the chip is the same type as the original RTL8111G, with the exception that the chip has a built-in switching converter rather than the linear LDO that the Atomic Pi uses natively.

PASS: JMicron JMC250

The card works in Windows and pfSense; the particular card I have has always been flaky in operation, and this was no exception.

PASS: ASUS PCE-N15 (Realtek RTL8192CE) 802.11n WLAN Adapter

Using this card is a performance downgrade compared to the dual-band adapter already present on the Atomic Pi, but it did function correctly in Windows.

Test 2: External Graphics Cards (eGPU)

PASS: EVGA/nVidia 8500 GT

The UEFI firmware doesn’t seem to want to acknowledge the presence of PCIe graphics adapters, which results in issues when testing in Windows. Initially, the card failed to enumerate correctly, identifying as a Microsoft Basic Display Adapter and displaying an error in Device Manager:

This device is not working properly because Windows cannot load the drivers required for this device. (Code 31)

The driver trying to start is not the same as the driver for the POSTed display adapter.

Manually downloading and installing the drivers allowed the card to work properly. As previously mentioned, the UEFI does not recognize the presence of the graphics card, so a monitor that is plugged into the graphics card won’t start working until Windows is loaded and the graphics driver initializes.

Additionally, the adapter’s resources won’t be used if the monitor is connected to the Atomic Pi’s built-in HDMI port – it’s a tradeoff between graphics performance and the ability to see what’s happening on boot; maybe this is different in a multi-monitor configuration, but I didn’t have enough desk space to test this.

PASS: XFX/AMD Radeon 4650

The same driver issues popped up when using this card as well, and the same fix applies.

PASS: ASUS/nVidia GTX 660 Ti

Ditto. This card was the best I had on hand for testing, and amusingly it consumes about 10-20 times the amount of power that the Atomic Pi uses, and is bigger in size as well.

Connecting this card allowed me to run games that would otherwise simply not work, such as Just Cause 2. It performed at about 20-30 frames per second: not great but not terrible. Just Cause 3 was absolutely painful to run, but this is not surprising as the Atomic Pi’s hardware is far below the game’s minimum requirements in almost all aspects.

Test 3: Storage

PASS: Dell PERC H200 SAS Adapter

To my surprise, not only will it work in Windows, it even supports booting from the UEFI! It mentions a SAS controller driver during boot, but I am not sure as to whether this is present in the Atomic Pi’s UEFI, or if it is a driver provided by the SAS adapter itself. The PCIe 2.0 1x interface limits the maximum throughput to less than 500 MB/s, which is slower than a single SATA III connection, and informal testing showed a maximum speed of about 330 MB/s. I have not investigated whether I could configure the hardware RAID features of the adapter from the operating system, but I imagine that doing so would dramatically improve performance when running a RAID 1 (mirrored) setup.

FAIL: OWC Aura Pro X 480GB NVMe SSD

This SSD was originally meant as an aftermarket SSD upgrade for a MacBook Air, but I bought a PCIe adapter card to use it in a regular PC. I was able to get it to enumerate in Windows, but any sort of read/write action caused it to lock up and throw errors in the Windows Event Log. I suspect this is likely a power supply issue due to how the PCIe riser board provides power (12 volts -> DC/DC converter -> 3.3V LDO linear regulator), or maybe the card just doesn’t want to cooperate.

FAIL: Marvell 88SE9128 2x SATA III + 1x PATA UDMA-133

This adapter caused Windows to almost freeze during boot. Instead, the boot animation would advance by one frame every 2 seconds until the card was removed. I suspect this was the PCIe bus attempting to negotiate a connection but failing, possibly due to signal integrity issues. Checking the Windows Event Log turned up nothing either.

Test 4: … More PCIe?

FAIL: Extender Cable

One would think that a simple straight-through cable would work, right? Nope – it seems like the signal has already been degraded enough after traveling across multiple non-ideal connections, and the extra length in a cable was just enough to degrade the signal beyond recovery.

PASS: PCIe Port Multiplier / ASMedia ASM1184e

UPDATE (July 3, 2019): It seems that the “USB 3.0” pinout on these PCIe risers is inconsistent. I was somehow able to make do by using a PCIe card plugged into my current riser, which then led to the ASM1184e board, which splits the PCIe 2.0 1x lane into four slots. It seems like the chip is able to handle the attenuation over such a long cable length, and it even includes a 100 MHz clock buffer, effectively “refreshing” the signal for more sensitive PCIe cards.

I tried the 88SE9128 card again, but still had no luck. Ethernet cards worked without issue, but I wasn’t able to test the graphics cards as the ASM1184e board’s PCIe slots are closed-ended 1x slots. I could try cutting the edge of the connector to allow for larger cards, but that’s an experiment for another day.

Conclusion

Despite all the roadblocks I encountered in this project (sometimes rage-inducing ones), I still enjoyed this project and the various techniques I tried along the way. The prospect of attaching desktop PCI Express peripherals to a CPU designed for a tablet opens up many different applications for the Atomic Pi, including ones that would otherwise have been less efficient or impossible using the board’s built-in hardware (e.g. dual Gigabit Ethernet without using USB, or GPU-intensive computing workloads).

The high-speed nature of modern computer systems presents many challenges for engineers and modders alike. Multi-gigabit interfaces easily reach into the realm of radio-frequency (RF) electronics, where things like AC transmission line effects and differential signal routing become very important. This issue, coupled with the fact that most modern devices use very small components, makes it difficult for many electronics hobbyists to access these interfaces on their own. Although a modification like what I did is technically possible, I don’t consider it practical for an average electronics/computer enthusiast.

Future Ideas

One avenue I’ve thought about pursuing is a mod board that is soldered onto the Atomic Pi’s PCB, allowing a more robust connection for the PCIe lines; this would include signal integrity improvements like ESD protection and PCIe line redrivers to strengthen the signals to/from the SoC and peripheral(s). PCB design is currently beyond my scope of knowledge, but it would be a fun way to dive right into the subject matter.

eMMC Adventures, Episode 4: Recovering data from physically damaged BGA eMMC Flash storage chips

As seen on Hackaday!

The ball grid array (BGA) chip package has been instrumental in getting modern electronics to fit in smaller and smaller spaces, as it uses tiny balls of solder on the bottom of the package to make electrical connections, instead of copper leads on the edge of the chip package. This allows for hundreds of connections to be made in a small amount of PCB area, but their small size also makes them very vulnerable to damage.

One common way for BGA chips to become damaged is called “pad cratering”, where the copper pad on the package’s substrate (basically a wafer-thin circuit board) separates and leaves behind a crater.

In the case of eMMC (Embedded MultiMediaCard), its package type is known as an FBGA (Fine-pitch Ball Grid Array), so the area of each pad is very small (0.4 mm in diameter!); it doesn’t take much at all to crater the pad – even gently removing solder with solder wick and generous amounts of flux can still cause damage! Most of the pads on an eMMC package are unused, but if any one of the DAT0, CLK, or CMD pads are damaged, then the chip is rendered unusable, even if the chip is placed into a socket for data recovery. If the DAT1-DAT7 pads are damaged, data recovery becomes much slower as the chip is forced to use fewer lines to transmit data (the MMC standard supports data over 1, 4, or 8 lines).
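As a rough sense of scale for that slowdown, here’s a quick estimate of dump times at each bus width, assuming a 52 MHz high-speed MMC clock and ignoring protocol overhead (real-world rates will be lower):

# Rough dump-time estimate for a 16 GB eMMC at different MMC bus widths
capacity_bytes = 16e9
clock_hz = 52e6                          # assumed high-speed MMC clock
for lanes in (1, 4, 8):
    rate = clock_hz * lanes / 8          # bytes per second on the data bus
    print(f"{lanes}-bit bus: {rate / 1e6:4.1f} MB/s -> "
          f"{capacity_bytes / rate / 60:4.1f} minutes")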

However, there is some hope. Many FBGA packages, including eMMC, use pads that are SMD (solder-mask defined); this means the solder mask is what defines the size of the pad, not the copper itself. Therefore, when the pad gets cratered, often there is a “halo” of copper left behind that still has a chance of getting an electrical connection.

The trick is how to get a flat conductive area that a chip socket can use to make a reliable connection (without a copper pad, soldering is no longer an option). The eMMC socket adapter I used breaks out the eMMC onto an MMCplus-shaped PCB that can be inserted into any commercial SD card reader.

Filling in the Blanks

There are a few possible materials that can be used to restore contact area on a damaged BGA pad. One possible option is a silver-filled conductive epoxy, but I have not tested its efficacy; an additional consideration is that the volume of the filled-in crater might not be enough to get a filling with sufficiently low resistance for a good connection.

Another option is using solder paste, which I used in this case. Unfortunately, solder’s surface tension is our enemy when trying to fill in a flat area (it wants to form cohesive balls and therefore won’t stick to the ring of copper left around the crater), so forcing the solder into the crater requires something flat, rigid, and capable of handling the high temperatures experienced during soldering.

At first I tried Kapton (polyimide) tape, but that was a massive failure since it didn’t have the rigidity to stay flat when the solder paste began to melt, and the liquid flux rendered the adhesive useless.

The solution to the issue came in the form of glass. Specifically, I used very thin (0.15 mm thick!) glass “cover slips” normally used to prepare specimens for viewing under a microscope. These can be very inexpensive and one can obtain hundreds of them for a few dollars. The key is to fill the craters with the solder paste by using a knife as a squeegee, then placing the cover slip on top of the eMMC and reflowing it.

It took a few iterations for the pads on some of my eMMC chips to be restored sufficiently, as the volume taken up by the solder will be less than the paste and its accompanying flux. It doesn’t have to fill the entire crater – it just needs to be enough for the eMMC socket’s pins to make a solid connection.

Conclusion

The high-density nature of modern BGA chips is both a blessing and a curse. When trying to do data recovery from devices that use such tiny chips, such as eMMC or UFS Flash storage, sometimes the desoldering process is too much for the chip’s pads to handle. With some ingenuity (and thin glass), it might be possible to temporarily restore enough conductive pad area to get the data off with the help of an eMMC socket.

Ramble: 2018 in review

Can you believe it? Another year has gone by in what seems like an instant – and boy has it been quite the year for the blog.

Smash Hits

This year has seen quite a few popular posts, with my blog post about building my own memory card seeing a whopping 11,450 views in March alone, totaling 18,195 views this year; in fact, March represented the second-largest view count of all time on my blog with 23,955 views, a tad under July 2015’s 25,100 views. My blog post about running Doom on an oscilloscope netted 5,670 views, and another post where I fixed an Intel Atom-based tablet well beyond economic repair received 2,700 views. Interestingly enough, my blog posts about the Kentli PH5 Li-ion AA battery (both its teardown and review) received 5,280 and 3,250 views, respectively – both without seeing any significant external referrals except through search engines; this also applies to the 2,900 views on my Kitchenaid induction cooktop blog post, which seems to imply that plenty of these cooktops are encountering problems in the field.

Views, Views, Views!

This year’s view count is the second best on record, scoring 126,250 views, compared to 2016’s 140,000 views. This is a good comeback after 2017’s significantly reduced viewership which only saw 99,390 views, and is a decent step ahead of 2015’s 120,140 views.

However, it appears the number of views from each visitor has decreased over the years (that is, it appears that readers aren’t staying as long on my blog as they used to). The drop began in mid-2016 after I changed my blog over to my current ripitapart.com domain instead of the .wordpress.com subdomain that it used to be. Perhaps this is a direct consequence of my domain change, or maybe it’s just a coincidence and readers just don’t stick around as long anymore.

This (Ad) Space For Rent

This marks the first full year that I’ve taken advantage of the WordAds program, allowing me to monetize the advertisements that appear on my blog as a natural consequence of running on WordPress’ Free hosting tier.

This year brought in $194 USD in ad revenue, which has helped pay for my domains and G Suite registration through WordPress in full (totaling $125 USD per year for three domains and G Suite). This means that simply keeping the blog alive no longer is a strain on my wallet, which is a tremendous help for me.

Looking Forward

As we say goodbye to 2018 and welcome 2019 with open arms, there’s always room to grow the blog further. I’ve been considering avenues like running a vlog on YouTube, and maybe even viewer contribution programs like Patreon (although recent issues with the aforementioned platforms have given me pause).

I still have a bunch of blog posts simmering on the back burner, so to speak. Some of these include data recovery from physically damaged eMMC modules (yes, I’m still doing stuff with eMMC 🙂 ) and upgrading the RAM in the cheap tablet I mentioned earlier. The upcoming year will be full of changes in my personal life as I finish my post-secondary education and begin my search for full-time work.

All in all…

Happy New Year! Thanks to all my viewers – I couldn’t have come this far without you! –Jason

Performing safer AC line voltage measurements using isolated amplifiers

DISCLAIMER: AC line (mains) voltage is not something to be taken lightly! Attempting to safely handle line voltages while minimizing the risk of harmful or fatal electric shock is the main motivator for me to design and build this circuit. However, I am no electronics engineer and I definitely have no formal training on international standards pertaining to high-voltage safety. I accept no responsibility, direct or indirect, for any damages that may occur if you attempt to make this circuit yourself, including personal harm or property damage. Additionally, there is no warranty or guarantee, express or implied, on any content pertaining to this blog post (or any other posts).

UPDATE (November 19, 2018): Added isolation voltage ratings for the amplifier and DC-DC converter.

As seen on Hackaday!

Back in mid-2017 I won a Keysight DSOX1102G digital storage oscilloscope (DSO), a piece of equipment long on my wish list but never acquired until then. One thing I’ve wanted to be able to measure with an oscilloscope for a long time was the waveform of the AC utility (in other words, the wall outlet). However, doing so presents a very real risk of blowing equipment up or shocking yourself (and possibly other people). In order to prevent this, I needed a way to perform measurements on the AC line without being directly connected to it; in other words, I need galvanic isolation.

Isolation Methods

There are many different ways to achieve galvanic isolation. Common methods are the use of transformers and optocouplers, but they each have their own disadvantages.

Optocouplers (aka optoisolators) are a common component used for isolation, but they require a fair bit of external circuitry to work correctly – not to mention that their current transfer ratio (CTR) varies with temperature and age, causing measurements to drift over time unless a feedback circuit is used. They also aren’t very fast: the common Sharp PC123 optocoupler has a cutoff frequency of only 80 kHz and a response time of 3–18 µs (though newer parts can be much faster).

Transformers don’t require active circuitry and would make stepping down voltages simple. However, their inductive nature causes issues when measuring waveforms with low-frequency content and sharp edges (like the output from modified sine wave inverters), resulting in inaccurate measurements due to the ringing and other distortion that the transformer creates. Additionally, common iron-core transformers aren’t very good at capturing frequencies above 20 kHz.

Solution: Isolation Amplifiers!

I settled on using an isolation amplifier to provide the necessary protection between the AC line and the oscilloscope. Several years ago TI provided sample kits for electronic motor drives, one component of which was the AMC1200 isolation amplifier; this is the IC I used in my AC waveform viewer – however, note that it has some limitations that I will address later in this blog post.

The AMC1200 uses TI’s digital capacitive isolator technology, using high-voltage SiO2 (silicon dioxide) dielectric capacitors on the chip itself for high voltage protection. The amplifier’s input is essentially digitized using a sigma-delta modulator, whose output is then sent digitally across the isolation barrier before being demodulated back into an analog output. It is rated for a working isolation voltage of 1200 Vpeak (848 Vrms), and a maximum isolation voltage of 4000 Vpeak (2828 Vrms), well above the typical voltages experienced on a 120V line.

AC Waveform Viewer Construction

 

As with most of my projects, my AC waveform viewer is built on FR4 fiberglass perfboard. The isolation components are the AMC1200 isolation amplifier from Texas Instruments and its power supply, the NXE1S0505MC isolated DC-DC module from Murata. The Murata module is rated for reinforced insulation up to 125 Vrms and basic insulation up to 250 Vrms, with a production-tested Hi-Pot rating of 3 kVDC. It does provide reinforced insulation at the voltages used in North America, but it is still the weaker link in terms of maximum isolation voltage.

The AMC1200 features differential inputs and outputs, with a maximum input voltage of +/- 200 mV intended for use with low-resistance current shunt resistors.

One potential problem with perfboard is that the through-holes compromise the high-voltage isolation of the circuit (reducing the creepage and clearance distances), acting as multiple series spark gaps. The solution to this is similar to how isolation slots are used on commercial PCBs; that is, drill out the holes! This greatly increases the distance between each side of the circuit and improves the safety of the circuit.

The AC voltage input is scaled down to a manageable level via a resistive voltage divider: four high-precision 300 kΩ resistors in series, plus a 1 kΩ resistor placed across the AMC1200’s input terminals. Since the input floats thanks to the galvanic isolation, I placed the amplifier’s input in the middle of the divider (that is, 600 kΩ of resistance sits between the amplifier and each of the line and neutral terminals) to provide some extra protection from harmful electric shock; 120 Vrms / 600 kΩ = 0.2 mA is the maximum current that could possibly flow if I were to contact this floating node on the amplifier (this calculation assumes my body has zero resistance, but human skin resistance is generally much higher). The voltage divider and the AC input terminals of my waveform viewer are further insulated with a layer of clear epoxy for even more protection.
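For the curious, the divider arithmetic checks out nicely against the AMC1200’s ±200 mV input range; here’s a quick sketch in C (my own back-of-the-envelope numbers, not measurements):

#include <stdio.h>

int main(void) {
    double r_series = 4 * 300e3;  /* four 300 kOhm resistors in series */
    double r_shunt  = 1e3;        /* 1 kOhm across the AMC1200's inputs */
    double v_line   = 120.0;      /* nominal line voltage, Vrms */

    double atten   = r_shunt / (r_series + r_shunt);  /* about 1/1201 */
    double v_rms   = v_line * atten;                  /* about 100 mVrms */
    double v_peak  = v_rms * 1.4142;                  /* about 141 mV peak, inside +/-200 mV */
    double i_fault = v_line / (r_series / 2.0);       /* about 0.2 mA from the midpoint */

    printf("attenuation: 1/%.0f\n", 1.0 / atten);
    printf("amplifier input: %.1f mVrms (%.1f mV peak)\n", v_rms * 1e3, v_peak * 1e3);
    printf("worst-case fault current: %.2f mA\n", i_fault * 1e3);
    return 0;
}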

The power supply terminals are fused with a 500 mA fuse, protected by an SMAJ5.0A TVS (transient voltage suppression) diode, and filtered with a 22 µF tantalum capacitor. The AMC1200’s output terminals are protected with 5.1 V Zener diodes at the terminal blocks for ESD and overvoltage protection.

Due to the floating nature of the waveform viewer, this essentially is a differential probe for my oscilloscope (and most high-voltage differential probes actually aren’t isolated!).

Circuit Limitations

No circuit is perfect, and mine is no exception. Here are a few issues with my circuit that I’d like to address:

Isolation limitations

The AMC1200 only provides “basic insulation”; that is, it will protect against electric shock as long as its insulation barrier is not damaged (in other words, there is no redundancy). Circuits with terminals that can be directly touched by humans need “reinforced” or “double” insulation to comply with international regulations.

The NXE1S0505MC isolated DC-DC converter has a maximum working voltage of 125 Vrms for reinforced insulation and 250 Vrms for basic insulation, with a Hi-Pot test at 3 kVDC. This is lower than the AMC1200’s maximum of 4000 Vpeak, but there should still be enough headroom to keep me safe in the event of a mild voltage spike. It might prove useful to add some sort of surge suppression with a MOV (metal oxide varistor) or similar device.

The perfboard layout is also sub-optimal from an isolation standpoint. Despite drilling out a row of holes to increase the creepage and clearance distances, it isn’t quite enough to meet regulations: the clearance is only 3 mm and the creepage isn’t much better, around 4 mm. This is still more than enough to withstand normal AC line voltages, but there is always a chance that higher-voltage transients will make their way onto the line, and the isolation barrier needs to take this into account.

Output limitations

The AMC1200 provides a differential output centred at a common-mode voltage of 2.5 V, which can be an issue with single-ended inputs like those on an oscilloscope. I’ve worked around this by using a floating power supply (like a USB power bank) and connecting the oscilloscope’s ground terminal to the AMC1200’s Vout- pin. The AMC1200 also has a limited bandwidth of 60–100 kHz, but for waveform monitoring this is sufficient. A bigger issue is that the amplifier’s noise and offset hurt performance: the high attenuation ratio effectively amplifies both, to the point where the AC waveform appears to ride on a roughly 2 VDC offset and the noise level is high enough that I need the averaging or high-resolution acquisition modes on the oscilloscope to get a clean trace.
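To put a number on that apparent offset: any input offset in the amplifier gets multiplied by the divider ratio when the trace is rescaled to line volts. A minimal sketch, assuming a worst-case input offset of ±1.5 mV (my recollection of the AMC1200 datasheet figure – check yours):

#include <stdio.h>

int main(void) {
    double divider_ratio = 1201.0; /* from the 1.2 MOhm / 1 kOhm divider above */
    double v_os = 1.5e-3;          /* assumed worst-case input offset, volts */

    /* Scaling the measured waveform back up to line volts scales the offset too. */
    printf("apparent DC offset at line scale: %.1f V\n", v_os * divider_ratio);
    return 0;
}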

Power supply limitations

The NXE1 isn’t quite suited for such a low-power task as operating a single amplifier input. According to the datasheet, the output voltage can rise to twice the rated voltage if it is loaded with less than 20 mA. To combat this, I placed a 5.1 volt Zener diode across the output to provide regulation, which unnecessarily wastes power. Another regulated module like the NXF1 series would be a better choice, and the unit cost at one-off quantities isn’t a huge deal either.
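As a rough illustration of the waste (a sketch with assumed numbers: the roughly 8 mA high-side draw for the AMC1200 is my recollection of the datasheet figure, not a measurement):

#include <stdio.h>

int main(void) {
    double i_min_load = 20e-3; /* NXE1 minimum load per its datasheet, amps */
    double i_amp      = 8e-3;  /* assumed AMC1200 high-side draw, amps */
    double v_out      = 5.1;   /* Zener clamp voltage, volts */

    double i_zener = i_min_load - i_amp; /* current the Zener must burn off */
    printf("power wasted in the Zener: %.0f mW\n", v_out * i_zener * 1e3);
    return 0;
}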

Room for improvement

With this circuit working properly, I had plenty of ideas to make the second iteration even better:

Simultaneous voltage/current inputs

With the ability to measure current, I could also measure a device’s current draw alongside the voltage, allowing me to determine its power factor.

True single-ended outputs

Most ground-referenced devices like oscilloscopes are not meant to handle differential inputs directly. Multimeters, especially battery-powered ones, are an exception.

Reinforced insulation rating on amplifier

The AMC1200 is only rated for basic insulation, so having an amplifier rated for reinforced insulation would provide greater electric shock protection. Alternatives like the Silicon Labs Si8920 could be a viable solution.

Waveform captures

 

 

Conclusion

Despite its ubiquity, AC power is a force that must not be taken lightly. Performing measurements on it, especially when viewing its waveform on a non-isolated oscilloscope, requires extreme caution as line voltage (especially in countries where 230 V is common) can easily injure or kill.

Using a voltage divider and isolation amplifier allows for safer measurements of the AC line without introducing distortion, especially compared to transformer-based implementations; this is critical when measuring the waveforms of modified sine wave inverters.

My implementation of an isolated differential probe helps protect me from electric shock when making measurements, while costing much less than a commercial high-voltage differential probe (for example, the CT2593-1 costs almost $330 USD on DigiKey).

But… which one would you trust more?

eMMC Adventures, Episode 3: Building a custom adapter to use cheap eMMC-based 32GB SSD modules

As seen on Hackaday!

While on my quest for more eMMC-based storage devices, I stumbled upon a few devices that piqued my interest: eMMC-based SATA SSDs! Two models in particular caught my eye: Dell had M.2 modules with a 2.5″ adapter, and HP had custom boards intended for use in cheap laptops (for example, the HP 14-an012nr). Although the former would have been easier for me to use (but not to acquire), I will be focusing on the latter in this blog post.

Overview of HP 14-am/14-an Series SSD Module

Unlike Dell’s convenient M.2 modules, the cheaper boards from HP (costing about $12 USD when I purchased them) have a physical interface meant to work only with their intended host; despite using a SATA interface electrically, they physically use a 10-pin FFC (flat flexible cable, aka “ribbon cable”), since they were designed solely for HP’s 14-am/14-an series of low-cost laptops. The boards are labeled “DINERAMD-6050A2862201-DB-A01” and, in my case, have a copyright date of 2016.

The BayHub OZ788WR2 Bridge Chip

These eMMC-based SSDs use a curious little chip, the BayHub OZ788WR2 (labeled 788WR2A on the chip itself). It is an SD/MMC-to-SATA adapter, with an SD UHS-II/MMCplus HS200 device interface and SATA II 3Gbps host interface. Apart from the brief description from the manufacturer, no other data is available for the chip (and even finding the chip online is basically impossible).

It’s a shame that so little is known about this chip (and that it’s so rare to find in actual devices), especially since high-performance SD-to-SATA adapters otherwise do not exist; existing adapters use outdated SD-to-CompactFlash bridge chips that are limited to 25 MB/s. If I had the engineering expertise, time, money, and ability to acquire these chips, I’d totally try to make an SD-to-SATA adapter with this chip… but alas, that will remain a fantasy.

Step 1: Pinout Discovery

The single connector on the eMMC SSD is a ZIF (zero insertion force) FFC connector, with no publicly available pinout or any other information. Perhaps this is why I got them for so cheap – apart from holding only 32 GB, nobody could use them in their own computer even if they wanted to!

When trying to reverse engineer an unknown connector pinout, one first needs to look for ground pins. This is easily accomplished with a multimeter’s continuity or diode test function, with the multimeter’s positive lead on a known ground point on the DUT (Device Under Test) – screw holes are often good candidates. Ground pins will read as a short, while active IC and power pins will look like a forward-biased diode – approximately 0.5 to 1 volt. I found 3 power pins (these are often grouped together on connectors for greater current capacity), 3 ground pins, and 4 SATA data pins. The data pins don’t show up on the multimeter test since they have series AC coupling capacitors, but they are easily located next to the connector, with clearly visible differential pairs leading to them.

The issue now is figuring out what order the SATA data pins are in, and how they relate to a regular SATA interface. As it turns out, the mapping is very simple: it matches the pinout of the regular 7-pin SATA interface (see the reference sketch below)! This makes sense, as both the SSD module and the laptop itself are designed to be as cheap as possible to manufacture.
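For reference, here’s the standard SATA signal-segment pinout I matched the pairs against – a small sketch; the HP module’s own 10-pin FFC ordering is deliberately not reproduced here, since I only probed it informally:

#include <stdio.h>

int main(void) {
    /* Standard 7-pin SATA signal segment, S1 through S7. */
    const char *sata_pins[] = {
        "S1: GND",
        "S2: A+ (host TX+)",
        "S3: A- (host TX-)",
        "S4: GND",
        "S5: B- (host RX-)",
        "S6: B+ (host RX+)",
        "S7: GND",
    };
    for (int i = 0; i < 7; i++)
        printf("%s\n", sata_pins[i]);
    return 0;
}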

Step 2: Building the Adapter (Take 1)

With the pinout known, the harder part is wiring up the connector. However, without a matching connector for the ribbon cable, I have no choice but to solder to it.

As I soon learned, not all flex cables are made of heat-resistant polyimide (aka Kapton) – this one melted before I could even tin the exposed leads. No matter, I’ll just use my trusty magnet wire and hook up the data and power lines! With the help of a salvaged SATA connector from a dead laptop drive, I was able to cobble together a crude adapter for the eMMC SSD board.

Although I didn’t end up taking a picture of the adapter, it wasn’t pretty. It wasn’t very functional either – although the eMMC SSD board was able to identify itself (on my PC it showed up as a “BHT WR202HH032G E70211F5”), I couldn’t actually perform any data transfer without causing the OZ788WR2 to log hundreds of interface checksum failures (but hey, it supports S.M.A.R.T. data reporting!).

After some tweaking of the wire spacing, I was able to get the adapter stable enough to work, and encased it in hot glue for protection. It lasted a few weeks but eventually stopped working because one of the data wires broke off inside the blob of hot glue. Additionally, the outer contacts on the ribbon cable connector were peeling away from its plastic substrate. It was time for a rebuild.

Step 3: Building a Dedicated eMMC SSD (the teaser!)

Since I had multiple eMMC SSD boards, I took one, replaced the eMMC with a 128GB one from Samsung (the KLMDG8JENB-B041) and removed the ribbon cable connector. In its place, I used some very thin twinaxial cable from a dead MacBook and used a gutted CFast-to-SATA adapter for a shell. Stay tuned for that in another blog post!

Step 4: Building the Adapter (again!)

Much like my previous attempt, I used a salvaged PCB from a dead laptop drive, but this time left much more of it intact instead of chopping it off at the connector. This donor was a dead Samsung HDD, and it had one feature I could use to make a stronger adapter: a TSOP footprint for the DRAM cache, which was just the right pitch to solder the ribbon cable to!

With a little help from my hot air rework station, I removed the DRAM cache and DC-DC converter, leaving the SATA AC coupling capacitors and the power input components (filtering choke and capacitors, and input overvoltage protection) behind.

After scraping off some solder mask, I soldered the SATA data wires and the ground wires surrounding them with very thin magnet wire, trying to keep the data pairs as close to each other as possible to minimize the chance of interference causing problems. The power wire was soldered to the power input components, right next to the input capacitor for better power delivery.

After checking with the multimeter that no short circuits were present, I hooked up an eMMC board and plugged it into my PC. It enumerated without issues, and I ran several tests, including CrystalDiskMark, h2testw, and Hard Disk Sentinel’s random read test, amassing several hundred gigabytes in reads and writes with zero CRC errors logged in the S.M.A.R.T. data.

With everything checked out, I cleaned the circuit with isopropyl alcohol and covered the exposed end of the ribbon cable and the magnet wires with clear epoxy for protection. I also used a bit of epoxy on the flex connector to re-secure the lifted contacts to the substrate.

Conclusion

With a bit of wire and a circuit board from a dead HDD, I was able to reuse cheap eMMC-based SATA SSDs on computers they weren’t meant for (and they even had copies of Windows 10 Home with extractable license keys! 🙂 ). Although not as fast as a modern full-fledged SSD, its relatively high 4K IOPS performance means it works well enough as a boot drive for quick tests of OS installation, without needing to sacrifice a bigger drive just for testing – and it consumes less than a watt even when fully active!

Gaining access to the Windows CE desktop (and Doom!) on the Keysight DSOX1102G Oscilloscope

As seen on Hackaday!

TL;DR – Yes, the Keysight 1000 X-Series oscilloscope runs Doom! The journey getting there wasn’t easy, though.

The oscilloscope is one piece of equipment that any self-respecting electronics enthusiast should have. In short, oscilloscopes let you view the electronic waveforms of a circuit, and digital storage oscilloscopes (DSOs) are especially useful since they can reveal infrequent glitches on signals that an analog oscilloscope or a multimeter wouldn’t pick up.


Keysight DSOX1102G oscilloscope

The subject of this blog post is the DSOX1102G from Keysight Technologies (formerly Agilent), part of their low-end offerings that still deliver very good value compared to their competitors. Like most of Keysight’s oscilloscopes, it runs an embedded operating system called Windows Embedded CE 6.0 (aka Windows CE or WinCE), but as with most WinCE devices, you almost never see the Windows interface since it’s hidden behind a custom user interface.

Stage 1: Awakening

When the Keysight 1000-X series of digital oscilloscopes first launched in early 2017, one reviewer on Hackaday noticed that certain data-saving features on the oscilloscope would cause it to crash and reboot, noting that a mouse pointer was visible on-screen for a few seconds before the scope rebooted. That post included a GIF of him attempting to save a file which caused the oscilloscope’s software to crash, but I noticed something peculiar in one frame of the video… there was a Windows taskbar visible right before the black error screen was displayed. Interesting…!


Freeze-frame of oscilloscope on Windows CE desktop shortly before crashing (image courtesy of Hackaday)

When I won my oscilloscope thanks to Keysight’s Scope Month giveaway program, I didn’t think much of this for a couple months until I encountered the crash screen as well. In my case, I found that the Windows CE title bar was visible on top of the oscilloscope’s crash handler; dragging the title bar simply ghosted the window on-screen and doing this a few times more caused Windows CE itself to hang. This was a pretty infrequent occurrence, so when I encountered subsequent crashes I simply let the oscilloscope’s crash handler scan the internal file systems and reboot the operating system.

However, by this point I was intrigued and wanted to learn more about what’s going on with the underlying Windows CE operating system. I found that the oscilloscope’s USB port is rather sensitive to errors, and simply wiggling a USB drive in the port would crash the oscilloscope. This still wasn’t enough to gather much information, since I could not get the oscilloscope to crash consistently.

Thus begins my quest to access the Windows CE desktop. Let’s go!

At first I tried a software-only solution, attempting to craft a .ksx firmware update file (in reality it’s just a .cab archive) that would close the oscilloscope software and open Windows Explorer – no dice. The oscilloscope software would simply throw an error message saying that it couldn’t open the update file. As it turns out, this solution wouldn’t have worked even if I could get the oscilloscope to load the update file since the oscilloscope’s software doesn’t exit to the desktop during the firmware update process anyway. Having encountered my first significant roadblock, I set my curiosity aside and simply used the oscilloscope as, well, an oscilloscope for a while.

Stage 2: Looking Deeper

True to my curious nature, one day I decided to see whether the oscilloscope would read and write to 3.5″ floppy disks (or as the young ones might call them, “3-D printed save icons” 🙂 ) using a USB floppy drive – and it did! However, I noticed one very peculiar issue: the oscilloscope would crash on boot if I left the floppy drive in the USB port when I powered it on. Eureka! – I had finally found a means to reliably crash the oscilloscope.

Unfortunately, this is where I hit my second roadblock. This crash-on-boot phenomenon would only occur if the floppy drive was the only device plugged into the USB port; it would not crash if I used a USB hub between the oscilloscope and the floppy drive. This meant that I would have to be very fast in order to switch between the floppy drive and a USB mouse/keyboard. Racing against the clock to unplug the drive and plug in my combination USB keyboard/touchpad in the middle of the boot cycle was getting tiring and frustrating very quickly. I needed a better solution… a hardware solution.


Custom USB A/B switcher I built to quickly swap USB devices

Using an old USB cable, a dead USB hub and a DPDT (double-pole, double-throw) pushbutton switch, I created a USB A/B switcher to make the process of switching between devices fast and easy. With this, I was able to interact with the Windows CE operating system for the fraction of a second that the taskbar was visible on-screen before the crash handler barged in to ruin my fun. Using the magic of my Samsung Galaxy S9’s slow-motion video feature, I determined that I could send keystrokes and Windows CE would act on them accordingly – even while the system was still on the splash screen! I was able to glean some information about the system by blindly entering keystrokes and seeing what the output was when the oscilloscope software crashed. Enter roadblock number 3…

The ability to very briefly interact with Windows CE was great, but it was still useless since I couldn’t actually take control of it before the oscilloscope’s crash handler rebooted the system. The crash handler had a pretty tight lock on the operating system: no amount of mashing the Windows key or Ctrl+Alt+Delete would let me back into Windows CE.

Stage 3: Getting a Foothold

Once again, my random curiosity would come in handy when I decided to use my old Sony Clie PEG-NX73V (a PalmOS handheld PDA dating back to 2003) as a USB drive; it had a Data Import feature which allowed the user to drag-and-drop files onto its memory card as if it were a removable disk.

Much like the USB floppy drive, a similar crash-on-boot effect occurred when I left the PDA plugged in when powering on the oscilloscope. But… unlike the floppy drive, the oscilloscope’s crash handler seemed to interpret the PDA’s file system as a corrupt firmware partition and asked to load a firmware update file from an external USB drive.


Firmware update prompt

This behaviour wasn’t very consistent, as I found that sometimes the oscilloscope software would load anyway, resulting in a very strange Windows CE window with a bright-cyan mouse pointer that ghosted the screen if I attempted to move it aside. In this limbo-like state I was able to drag the InfiniiVision oscilloscope software’s window aside and muck around the operating system that way. However, the oscilloscope software was very aggressive and would regain focus every time I clicked behind its window; after some struggling with the system’s very strange colour palette, I was able to fumble my way around. I couldn’t browse the file system since I couldn’t wrestle control from the oscilloscope software long enough to do so, but I was able to bring up the System Properties dialog box, which revealed that the oscilloscope runs Windows CE 6.00 with 100 MB of RAM at its disposal.


Accessing some Windows CE dialogs while InfiniiVision software still running – what a mess!

I then decided to browse the EEVblog forums, where community members were hard at work trying to hack extra features into the oscilloscope. From there I found out that the firmware looks for a file called “infiniiVisionStartupOverride.txt” in the USB drive’s root, and if it finds one, it will try to load the oscilloscope software from there. Although the firmware didn’t actually load the software from the USB drive, it did stop the oscilloscope software from starting up and pulling control away from me. This is where things get really interesting: the crash handler opens an Explorer dialog box, and simply entering *.* into the file name textbox let me begin browsing the oscilloscope and USB drive’s file systems! This was exactly what I needed to begin controlling Windows CE! However, I was once again presented with another roadblock: the oscilloscope would reboot after 60 seconds, which limited the amount of time I could browse the operating system.

 

 

After copying over a few Windows CE tools like Windows CE Task Manager, I noticed that two interesting processes were still running when the crash handler was visible: “recoverInfiniiVision.exe” and “processStartupFolder.exe”. It seemed that the recoverInfiniiVision process was indeed the crash handler preventing me from accessing Windows CE when the oscilloscope software crashed. Killing the processStartupFolder process with iTaskMgr (Windows CE Task Manager’s trial version can’t kill processes) was enough to prevent the oscilloscope from rebooting, and killing the recoverInfiniiVision process left me with a blank Windows CE desktop – I was in! Unfortunately, I was unable to restore the taskbar, which made navigating the OS quite cumbersome.

After creating a new folder on the desktop to open Explorer, I was finally able to do some real exploration (heh) of the oscilloscope’s file system. The tool Total Commander/CE was of great help since it also had a built-in text editor, which this version of Windows CE lacked.


Browsing internal file system with Total Commander/CE (no taskbar just yet…)

Stage 4: Full Control

Now that I was able to see the blue sky desktop, all I needed for the full Windows CE experience was to bring back the taskbar. After a bit of Google-fu and browsing Stack Overflow, I whipped up some sample code to turn the taskbar back on. After opening this from Explorer, I had a full Windows CE desktop! YES – finally I had full control over the underlying operating system!
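For the curious, the taskbar fix really is only a few lines. Here’s a minimal sketch along the lines of that sample code (it uses the standard Windows CE taskbar window class “HHTaskBar”; this is my reconstruction, not necessarily the exact source of my ShowTaskbar.exe):

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPWSTR lpCmdLine, int nShow)
{
    /* The Windows CE shell's taskbar lives in a window of class HHTaskBar. */
    HWND hTaskBar = FindWindow(TEXT("HHTaskBar"), NULL);
    if (hTaskBar != NULL) {
        ShowWindow(hTaskBar, SW_SHOW); /* un-hide the taskbar window */
        EnableWindow(hTaskBar, TRUE);  /* let it accept input again */
    }
    return 0;
}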


Freedom at last – a full Windows CE desktop on my oscilloscope!

From there, I began looking through the file system to see what interesting utilities were in store. The Command Prompt wouldn’t open at all when I tried to run it, but digging through the Registry with an editor revealed a key at HKEY_LOCAL_MACHINE\Drivers\Console\OutputTo that was set to 0xFFFFFFFF. Setting that key to 0 was enough to make the Command Prompt visible on the desktop, so I created another small program to take care of this as well; a sketch of it follows.
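This helper is equally tiny; a hedged sketch using the standard registry API (again, my reconstruction of what EnableLocalCmd.exe does, not its verbatim source):

#include <windows.h>

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPWSTR lpCmdLine, int nShow)
{
    HKEY  hKey;
    DWORD value = 0; /* 0 routes console output to the local display */

    if (RegOpenKeyEx(HKEY_LOCAL_MACHINE, TEXT("Drivers\\Console"), 0,
                     KEY_SET_VALUE, &hKey) == ERROR_SUCCESS) {
        RegSetValueEx(hKey, TEXT("OutputTo"), 0, REG_DWORD,
                      (const BYTE *)&value, sizeof(value));
        RegCloseKey(hKey);
    }
    return 0;
}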

Things were looking good, so I created a batch file with all the commands required to kill the oscilloscope software, the startup folder handler and the crash handler; restore the taskbar; and re-enable the Command Prompt. However, this still required my PDA to trigger the crash handler’s firmware recovery menu, which prevented others from reproducing the effect.

After some digging around, I found that as soon as the splash screen appeared and the front panel LEDs began cycling, Windows CE would accept keystrokes even without a software-crashing device present; pressing Windows and U caused the oscilloscope to hang, as I was essentially opening the Start menu and selecting the Suspend option (from which the operating system had no means of waking, since the oscilloscope only has a hardware power switch). With this in mind, I renamed my batch file to “a.bat” for easier typing and tried to launch it right in the boot process by pressing Windows, then R (to open the Run dialog), typing “\usb\a.bat”, and finally pressing Enter. This only caused the oscilloscope to remain at the splash screen, but the Windows CE operating system was otherwise alive underneath, even if I couldn’t see what was actually going on. As it turns out, the crash handler is the key to making the Windows CE desktop visible, and all I had to do was add some lines to my batch file to launch and subsequently kill the crash handler. With the final touches added, I can (semi-)automatically boot the oscilloscope right to the desktop with just a USB drive, mouse and keyboard!

Stage 5: Yes, it runs DOOM!

Now that I had access to Windows CE, I could finally answer the question, “Does it run DOOM?” As a matter of fact, yes it does! It only took a year and a half after the oscilloscope’s launch, but this milestone has finally been achieved.

 

 

Stay tuned for my next blog post where I play around some more with this iconic video game – on a piece of hardware that was never intended to play games in the first place.


Doom in action at glorious 320×240, 256 colours! On an oscilloscope!

Downloads

I have released the files required for you to try this on your own scope – but be warned, I am not responsible if you brick your scope or anything else bad happens! I have only tested this on my DSOX1102G, but I suspect that the rest of the 1000 X-series and other Keysight scopes with the firmware recovery option may work too. The oscilloscope firmware is laid out such that the Windows CE file system lives entirely in RAM and is not retained upon reboot, so any system-breaking changes to Windows will not brick the scope (the firmware files are kept in hidden NAND Flash-resident directories that aren’t accessible from Explorer unless you enter them manually by name). Please leave a comment if you decide to try it yourself, and whether or not it works.

You will need to format a USB drive with FAT or FAT32 and extract the .zip archive of my tool, Scope Liberator (click here!), to the drive’s root. Instructions are found in readme.txt.

If you’re interested in the source code for the support programs to re-enable the taskbar and local command prompt, I have made them available as well (they’re literally lines derived from sample code, but at least the icons for ShowTaskbar.exe and EnableLocalCmd.exe are original!).

Upgrading a passive Power over Ethernet splitter with 802.3af compatibility

As seen on Hackaday!

If you haven’t heard of Power over Ethernet, chances are you’ve experienced its usefulness without even knowing it. Power over Ethernet (PoE for short) does exactly what the name implies: power is sent over the same Ethernet cable normally used for data transfer. This is often used for devices like IP phones and wireless access points (you often see these APs in restaurants and other establishments mounted to the ceiling to provide Wi-Fi access), as it is far easier, cheaper and safer to provide low-voltage power than to wire in AC power, which requires the help of a licensed electrician.

A (Very Simplified) Background on Power over Ethernet

The actual PoE standards (click here to learn more) – IEEE 802.3af (up to 12.95 watts), 802.3at (up to 25.5 watts) and the newest 802.3bt (up to 60–90 watts) – provide vendor-independent methods for sending and receiving 48-volt DC power over the Ethernet cable without frying the device on the other end if it isn’t equipped to receive power. The PSE (power sourcing equipment) manipulates the Ethernet pairs to sense the presence of a PD (powered device), then queries what power level it should provide; after this negotiation phase, the PSE finally sends 48 volts to the PD (usually on pins 1/2 and 3/6, called Alternative A or Mode A), and all is merry thanks to the help of “phantom power”. However, cheaper devices are available which skip all of this and simply shove DC power over the Ethernet cable with no regard for the safety or well-being of the remote device – this is called “passive PoE”. There are no regulations on passive PoE, but such devices generally send DC power (often 12, 24 or 48 volts) over Ethernet pins 4/5 and 7/8 (called Alternative B or Mode B), usually shorting the two pins of each pair together for easy power transmission at the expense of being limited to 10/100 Mbps speed.
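For quick reference, here are the commonly cited 802.3af/at detection and classification figures, expressed as a little C table (values as I recall them from the standards – verify against the actual IEEE documents before building anything):

#include <stdio.h>

int main(void) {
    /* Detection happens first: the PSE probes for a ~25 kOhm signature
       resistance at low voltage before applying the full 48 V. */
    struct { int cls; const char *sig; double watts; } classes[] = {
        {0, "0-4 mA",   12.95}, /* default if the PD doesn't classify */
        {1, "9-12 mA",   3.84},
        {2, "17-20 mA",  6.49},
        {3, "26-30 mA", 12.95},
        {4, "36-44 mA", 25.50}, /* 802.3at ("PoE+") only */
    };
    for (int i = 0; i < 5; i++)
        printf("Class %d: %s -> %.2f W at the PD\n",
               classes[i].cls, classes[i].sig, classes[i].watts);
    return 0;
}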

Many years ago (I’m talking back in high school, over 6 years ago), I bought some cheap PoE equipment – a D-Link DWL-P200 PoE injector and splitter kit – assuming it was compatible with 802.3af due to its use of 48 volts… it wasn’t. Since I bought this on a trip to the US and I live up in (the arguably nicer 🙂 ) Canada, I couldn’t be bothered attempting to return it to the Fry’s that I bought it from; it served some use powering a wireless router for a few years before I ditched it in favour of a ZyXEL GS1920HP-48HP 802.3at-compatible switch and Ubiquiti UAP-AC-PRO access point. It then sat in my junk bin for a while before I took it back out and conjured up a solution to make the splitter compatible with the PoE standard; this way I could tap into my existing 802.3at-compatible infrastructure I wired into my house (or perhaps use it to siphon a couple watts in other places 🙂 ).

Note that I am using the word “compatible” and not “compliant”, since this definitely does not attempt to comply with all of the electrical specifications in the 802.3af/at standards; however, I have tested this on 802.3af and 802.3at Ethernet switches and have had no issues with the upgraded splitter. One significant difference is that true PoE requires electrical isolation, and my splitter certainly does not provide it; for my use this isn’t an issue, and even some commercial splitters omit this feature to reduce cost.

Modifying the D-Link DWL-P200 for 802.3at Compatibility

The DWL-P200 is a near-ideal candidate for conversion to 802.3af/at (I’ll call it “active PoE” from now on) since it already uses 48 volts for power – all it really needs is an active PoE-compatible front-end, which requires an Ethernet isolation transformer, two diode bridges, a TVS (transient voltage suppression) diode, and an 802.3af PD controller chip (and a partridge in a pear tree?). Easy enough, right… right?

Step 1: Prepare the Power Interface

The DWL-P200 splitter does not use a diode bridge on its power input (pins 4/5 are positive and pins 7/8 are negative), but active PoE requires that PDs include diode bridges for polarity-insensitive operation. Additionally, the splitter lacks the isolation transformer normally used for Ethernet; instead, it had 10 Ω resistors on pins 1/2 and 3/6 as series coupling between the input and output. These were removed to provide a spot for the centre-tapped isolation transformer that active PoE requires for Mode A (power on pins 1/2 and 3/6).

After harvesting an Ethernet transformer from a dead MacBook (seriously, dead computers make for great component stores), I scraped away the insulation on the differential data pairs and connected each pair to the transformer with 40-gauge magnet wire, using 30-gauge Kynar wire for the power lines connected to the centre tap of each pair. To affix it all, I used a blob of hot glue (which turned out to be pretty useless since this board runs HOT!) and ran the wires off to one of the diode bridges in the front-end I built.

The data output pins (1, 2, 3 and 6) are terminated to an AC-coupled ground using 75 Ω resistors, an arrangement often referred to as “Bob Smith termination”, to help reduce noise.

Step 2: Build the PoE PD Front-End

The actual front-end was built as two separate boards: the first was the power input board; the second was the 802.3af active PoE PD controller, which had its own construction considerations that I’ll address in a bit.

The power input board is pretty simple, comprising two Bourns CD-HD201 60-volt Schottky diode bridges and an SMAJ58A 58-volt TVS surge suppression diode to clamp the voltage spikes that can occur when a cable is unplugged, due to the inductance of the cable itself. The inputs of the diode bridges were then connected to the centre taps of the Ethernet transformers and to the original pins 4/5 and 7/8 on the power/data input of the splitter.

The second board is the PoE PD controller, which is responsible for negotiating with the 802.3af/at PSE controller at the other end of the cable. I used the Texas Instruments TPS2378 PoE PD controller, which was meant for 802.3at Class 4 (25.5 watts maximum) but I’m only using it for 802.3af Class 0 (up to 12.95 watts). The TPS2378 has a heat-sinking “PowerPAD” on the bottom which must be connected to Vss (ground); I used solar cell tabbing wire underneath and created a sort of fin-like arrangement on the unused area of the DipMicro SOIC/TSSOP-to-DIP adapter board (they don’t sponsor me – I just really like their adapter boards!). The external PoE detection and power class signal resistors were soldered to the DIP pads on the adapter to save space.

Step 3: Put it Back Together Again

Once the two boards were assembled and connected to the original FP5001 DC-DC converter‘s input, the boards were nestled inside the original case and some Kapton tape was wrapped around the case since I damaged some of the plastic clips that held it together during disassembly.

Conclusion

With the active PoE upgrades installed, the splitter now works with 802.3af, 802.3at and passive 48-volt PoE power sources. However, the internal construction of the splitter means it only supports 10/100 Mbps Ethernet. Additionally, the board gets very hot under full load (I’ve measured internal temperatures well above 100 degrees C with the case closed), which negatively impacts its efficiency, but I consider this a fair trade-off considering the splitter was never meant to work on active PoE in the first place.

WordAds Adventures, Episode 4

Oh my, it has been a while since the last update, hasn’t it?

Results for February to June 2018

Since my last report, I managed to earn $45 in one month after getting my blog post featured on the r/hardware subreddit, which then made its way to Hacker News (a news aggregation site), bringing the most traffic to my site in many years; March 2018’s view count was the second highest on record, bested only by June 2014’s 25,051 views which were not monetized. I even received my first payout on May 29th.

However, the rest of the months have been less fruitful, especially as of late. Since May, my monthly ad revenue has dropped below $10 USD/month. I thought that the boost in traffic from March would result in more views after the fact – I could not have been more wrong.

Let’s look at the data (note that I am now also tracking statistics for ads per view and ads per visitor):

Payout  Period    Earnings  Ads Served  Views   Visitors  $ Per Impression  Ads Per View  Ads Per Visitor
0       Nov 2017  $ 5.03     4,648       8,533   3,833    $0.00108219       0.545         1.213
        Dec 2017  $15.18    17,369       9,734   4,344    $0.00087397       1.784         3.998
        Jan 2018  $11.96    17,887       9,428   4,359    $0.00066864       1.897         4.103
        Feb 2018  $11.20    18,532       7,980   3,595    $0.00060436       2.322         5.155
        Mar 2018  $45.89    74,754      23,933  14,537    $0.00061388       3.123         5.142
        Apr 2018  $10.99    20,100       8,017   3,595    $0.00054677       2.507         5.591
1       May 2018  $ 9.88    20,046       7,700   3,522    $0.00049287       2.603         5.692
        Jun 2018  $ 7.78    16,699       6,704   2,967    $0.00046590       2.491         5.628


WordAds monthly earnings (November 2017 to June 2018)


WordAds monthly pay-per-impression rate (November 2017 to June 2018)

Since my last update, I have noticed that my pay-per-impression rate is steadily decreasing; although it has mostly leveled off, it continues to creep downward.

Conclusion

The data sends a clear message: in order to keep receiving an appreciable amount of ad revenue, I need to keep posting and getting my blog posts published on various outlets. Additionally, returns from existing content are not steady; they continuously decline even when viewership doesn’t fluctuate appreciably from month to month. (Granted, that’s probably common sense – I just need to get back in gear and make more posts, many of which are still in draft form.)

Status Update: Of Phones and Fire

Things might be on a bit of a hiatus for the next little while. My trusty Sony Xperia Z5 Compact literally went up in flames a few hours ago, and I need to find a replacement very soon, as well as recover any data that wasn’t saved to my SD card. Thankfully, apart from a sore throat and burning eyes from the battery smoke, I am doing fine (and so is my house).


My Sony Xperia Z5 Compact… after the lithium-ion battery fire.

Once things settle down, I’ll hopefully have a juicy story about lithium-ion battery fires and (failed) eMMC data recovery.

UPDATE (May 18, 2018): I upgraded to a Samsung Galaxy S9 a couple days ago. The eMMC chip I desoldered from the Z5 Compact is effectively bricked: it identifies itself, but no data can be read. I suspect the intense heat “baked” the NAND flash, resulting in too many uncorrectable bit errors for the firmware to recover from. There goes my progress in Angry Birds 2 (among other data)…