Unboxing and review of SanDisk 64GB microSDXC High Endurance Card

Dashcams: they can be a crucial tool when reconstructing events in a vehicular incident, or a source of entertainment when watching compilations on YouTube. Like any modern device, they generally use SD or microSD cards as their storage medium. However, not all cards are created equal.

Cheaper cards, like SanDisk’s Ultra lineup, use cheaper TLC (triple-level cell) NAND Flash that is ill-suited to the harsh working conditions of a dashcam. Not only does the card have to endure temperature extremes, the constant writes can burn through the Flash’s write cycles in short order. In fact, SanDisk specifically denounces this line of cards for use in continuous-recording applications.

The solution: high-endurance memory cards! These cards (at least in theory) use more durable MLC or even SLC NAND Flash, which can take many more write cycles. I purchased the 64-gigabyte model, the SDSQQNR-064G-G46A.

Unboxing

The card’s packaging isn’t much different than SanDisk’s typical microSD card offerings. The paper-and-plastic package includes a small blister pack that holds the microSD card itself and the full-size SD card adapter, without a carrying case (granted, the memory card is expected to stay inside the dashcam for most of its working life).

The packaging also includes a license key for a 1-year subscription to the RescuePRO data recovery software (although in all honesty, you’d be better off using the free PhotoRec software instead).

Endurance Rating

SanDisk’s lineup of high-endurance memory cards is designed for use in very write-intensive workloads, such as constant video recording.

Unfortunately, the endurance specifications for these cards are (probably intentionally) vague, only providing a set number of hours of video recording. However, we can infer a rating with a little bit of math.

SanDisk’s card packaging defines Full HD video to be 26 Mbps, which is equivalent to 3.25 (binary) megabytes per second. This equates to 11,700 megabytes per hour, or 11.426 gigabytes per hour. With a rating of 5,000 hours at this data rate, we get a specified endurance of 57,128.91 gigabytes written, or 55.79 terabytes written (TBW).

Memory cards, like other block-based storage media, often define capacities with decimal prefixes, whereas computers usually use binary prefixes. A “64-gigabyte” card is really 59.605 binary gigabytes (“gibibytes”) in capacity, but in this blog post I’m using the Windows notation of gigabytes; that is, calculating in binary but displaying as decimal. 😛

Therefore, we get a final calculated P/E (program-erase) cycle count of… 936 cycles. This is more in line with traditional 2D TLC NAND Flash, so I suspect that this rating is either based on different bitrates, or SanDisk is being really, really conservative in their estimates – or heck, maybe this really is just TLC NAND Flash that’s being configured and/or warrantied differently by SanDisk. As much as I am tempted to remove the epoxy coating that covers the manufacturing test pads in order to get a NAND Flash signature directly, I like having a warranty for at least a few years. Maybe I’ll buy another card to try this on…
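For anyone who wants to retrace the arithmetic, here’s a quick Python sketch of the same calculation (the bitrate and hour figures are the ones printed on the packaging; everything else follows the “calculate in binary, display as decimal” convention described above):

# Reproduce the endurance arithmetic above.
MIB = 1024 * 1024                      # one binary megabyte, in bytes

bitrate_mbps = 26                      # "Full HD" bitrate from the packaging
rated_hours  = 5000                    # rated recording hours at that bitrate

bytes_per_second = bitrate_mbps * MIB / 8          # 3.25 "megabytes" per second
gib_per_hour     = bytes_per_second * 3600 / (1024 * MIB)
total_gib        = gib_per_hour * rated_hours

print(f"{gib_per_hour:.3f} GB/hour")               # ~11.426
print(f"{total_gib:,.2f} GB written")              # ~57,128.91
print(f"{total_gib / 1024:.2f} TB written (TBW)")  # ~55.79

# Dividing the total bytes written by the card's capacity gives the rough
# P/E cycle figure discussed above; the exact number depends on whether you
# use the labelled capacity or the (slightly larger) raw NAND capacity.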

Card Information

Using an older laptop with a true SD-compliant slot (most newer ones are just USB card readers internally), I was able to grab the card’s metadata from Linux. These information files are found in /sys/block/mmcblkX/device, where X is usually 0 (it can vary depending on your host machine). Android used to be able to do this as well, but nowadays it’s not possible without a rooted operating system.
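If you’d like to pull the same metadata yourself, each attribute is just a small text file under sysfs; a minimal Python sketch (assuming the card enumerated as mmcblk0) looks like this:

# Dump the register-level metadata Linux exposes for an SD/MMC card.
# Assumes the card enumerated as mmcblk0 on a native SD host controller.
from pathlib import Path

DEVICE = Path("/sys/block/mmcblk0/device")

# Attribute files provided by the kernel's mmc driver
for attr in ("cid", "csd", "manfid", "oemid", "name", "date", "fwrev", "hwrev", "serial"):
    node = DEVICE / attr
    if node.exists():
        print(f"{attr:>7}: {node.read_text().strip()}")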

Item                      Value
CID (Card ID)             035744534836344780ed1bbb9e013100
CSD (Card Specific Data)  400e0032db790001dbd37f800a404000
Manufacturer ID           0x03 (SanDisk)
Manufacture Date          January 2019
Device Name               SH64G
Firmware Version          0x0
Hardware Revision         0x8

Initial Formatting

The card is formatted as exFAT with a 16 MB offset (that is, the first 16 MB of the card is unallocated) and an allocation unit size of 128 kilobytes. It uses a very basic MBR (Master Boot Record) partition structure, with the first sector being the bare minimum to be recognized as a valid structure.
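You can confirm the layout yourself by decoding the first MBR partition entry from the card’s first sector; a minimal sketch (assuming the raw device is /dev/mmcblk0 and you have permission to read it) is below:

# Decode the first MBR partition entry to confirm where the exFAT volume starts.
# Assumes the card is visible as /dev/mmcblk0 and you can read the raw device.
import struct

SECTOR = 512
with open("/dev/mmcblk0", "rb") as dev:
    mbr = dev.read(SECTOR)

assert mbr[510:512] == b"\x55\xaa", "not a valid MBR boot signature"

entry = mbr[446:446 + 16]                       # first of four 16-byte entries
part_type = entry[4]
start_lba, num_sectors = struct.unpack_from("<II", entry, 8)

print(f"partition type : 0x{part_type:02x}")    # 0x07 is used for exFAT/NTFS
print(f"start offset   : {start_lba * SECTOR // (1024 * 1024)} MiB")
print(f"partition size : {num_sectors * SECTOR / 1e9:.1f} GB")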

Performance

Now that I’ve probably irked some of my readers with my usage of decimal and binary prefixes, it’s time to see how fast this card can go. SanDisk’s own ratings for the card are very brief, citing sequential read/write speeds of 100 and 40 MB/s respectively. It also carries the V30 Video Speed Class rating, which guarantees a minimum sustained sequential write speed of 30 MB/s.

All the tests below were performed on my desktop computer using an FCR-HS4 USB 3.0 reader from Kingston, which is based on the Realtek RTS5321 chipset.

CrystalDiskMark

CrystalDiskMark is the de-facto standard for storage benchmarks. I’m using the 64-bit edition of CDM, version 5.2.0.

I/O Type           Read                       Write
Sequential QD32    91.80 MB/s                 60.56 MB/s
Sequential         93.33 MB/s                 61.66 MB/s
4K Random QD32     8.319 MB/s (2129.7 IOPS)   4.004 MB/s (1025.0 IOPS)
4K Random          8.121 MB/s (2079.0 IOPS)   3.971 MB/s (1016.6 IOPS)

The sequential I/O speeds are on par with a modern microSDXC card, and the IOPS aren’t too shabby either; they exceed the requirements of the A1 performance class, which calls for read/write IOPS of 1,500 and 500 respectively. This could make this type of card a viable option for other write-heavy environments, including single-board computers (SBCs) like the Raspberry Pi, where memory card failures due to excessive writes are common.

ATTO Disk Benchmark

The card’s read/write performance levels off at around the 64-kilobyte mark during testing, showing that operations smaller than this incur a significant performance penalty. This may also be indicative of the internal page and block sizes of the NAND Flash itself.
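To reproduce that knee in the curve without ATTO, one could time sequential writes at a few request sizes; the sketch below is only a rough stand-in for a proper benchmark (the mount point is a hypothetical placeholder, and the OS page cache will smooth out the results somewhat):

# Rough block-size sweep: time sequential writes at increasing request sizes
# to find where throughput levels off (around 64 KB on this card).
# TEST_FILE is a hypothetical path on the mounted card.
import os, time

TEST_FILE = "/mnt/sdcard/blocksize_test.bin"
TOTAL = 64 * 1024 * 1024                       # write 64 MiB at each block size

for kib in (4, 16, 64, 256, 1024):
    block = os.urandom(kib * 1024)
    start = time.perf_counter()
    with open(TEST_FILE, "wb", buffering=0) as f:
        for _ in range(TOTAL // len(block)):
            f.write(block)
        os.fsync(f.fileno())                   # make sure the data hits the card
    elapsed = time.perf_counter() - start
    print(f"{kib:>5} KiB blocks: {TOTAL / elapsed / 1e6:6.1f} MB/s")

os.remove(TEST_FILE)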

Hard Disk Sentinel

Hard Disk Sentinel comes with a bunch of disk benchmarking tools, including some to test the entire “surface” of a drive. I used the software’s Surface Test tool to measure the card’s performance before and after filling the drive with data – first with random data, then with all zeroes.

Random Seek Test

The Random Seek Test measures the card’s latency when performing random “seeks”, although more accurately it reads a single sector from a random location.

State                 Average Latency  Minimum Latency  Maximum Latency
Empty/Initial (0x00)  360 µs           350 µs           420 µs
Random Fill           600 µs           590 µs           670 µs
Zero Fill             600 µs           590 µs           690 µs

The card initially had about 420 microseconds of latency, but after filling the card with random data, this increased to 670 microseconds. Filling the card with all zeroes afterwards did not improve performance, and this isn’t helped by the fact that SD cards generally lack the ability to “TRIM” unused sectors the way SSDs or eMMC chips can.
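For the curious, the same idea can be approximated in a few lines of code: seek to a random sector on the raw device, read it, and time the read. The sketch below assumes the card is exposed as /dev/mmcblk0 on a Linux machine; it is a rough approximation of HDS’s test, not a replacement for it (page-cache hits will flatter repeated runs):

# Approximate a random seek test: read single 512-byte sectors from random
# locations on the raw device and time each read.
import os, random, statistics, time

DEVICE  = "/dev/mmcblk0"
SECTOR  = 512
SAMPLES = 200

latencies = []
with open(DEVICE, "rb", buffering=0) as dev:
    dev.seek(0, os.SEEK_END)
    total_sectors = dev.tell() // SECTOR
    for _ in range(SAMPLES):
        dev.seek(random.randrange(total_sectors) * SECTOR)
        t0 = time.perf_counter()
        dev.read(SECTOR)
        latencies.append((time.perf_counter() - t0) * 1e6)   # microseconds

print(f"avg {statistics.mean(latencies):.0f} µs, "
      f"min {min(latencies):.0f} µs, max {max(latencies):.0f} µs")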

Full Surface Read (or at least an attempt)

This is where things get a bit… interesting. It was around this time that I noticed some performance inconsistencies that didn’t show up in other benchmarks. Although the I/O speeds largely matched what my other benchmarks revealed, I noticed frequent dips below normal, often down to the mid-20 MB/s range! I wasn’t sure whether this was necessarily the card’s fault (pauses in reads/writes could result in performance degradation if the device can’t buffer the data well enough), or if my card reader/operating system/etc. was responsible.

I decided to hold off on publishing the sequential write test until I get this issue figured out – perhaps it’s worthy of a blog post all on its own…

Resurrecting a dead MacBook Pro (mid-2012 13-inch, model A1278)

As seen on Hackaday!

A couple weeks ago, I picked up a dead MacBook Pro that was on its way to the recycle bin, and was curious as to whether I would be able to fix it. It had a note attached to it citing several issues with the computer: the display doesn’t work, the battery doesn’t charge, one of the USB ports doesn’t work, and it won’t load an operating system. It certainly didn’t look particularly promising, but I felt it would be a good way to test my skills in component-level repair – with a pretty nice prize if I succeeded.

Triage

The computer I picked up is a mid-2012 MacBook Pro by Apple; it is the A1278 model with a logic board number of 820-3115-B, and it comes with an i7-3520M CPU and 8 GB of DDR3 RAM – however, the hard drive was taken out of the computer by the time I received it. As previously noted, the computer had a laundry list of issues that were certainly the reason the original owner decided to discard their computer – a laptop that doesn’t boot nor have a display isn’t a particularly useful one.

Connecting a MagSafe AC adapter to the computer revealed even more issues: although the note already said the unit wouldn’t charge, there was no LED indicator on the power adapter’s plug, and the computer wouldn’t power on even with external power connected; the only sign of life was one of the battery’s LED level indicators rapidly flashing when I pressed its button. With this functionality test being unsuccessful, I decided to open up the computer to see what else was wrong…

Troubleshooting & Diagnosis

Unscrewing the bottom cover revealed what horrors the computer had experienced. There was clear evidence that it had suffered from liquid damage: rampant corrosion around the LCD connector and some of the power circuitry, and some of the corrosion deposits were even left on the computer’s bottom cover! If you watch Louis Rossmann’s videos, you would know that liquid damage rarely is an easy fix, especially when high-voltage LED backlight circuitry gets involved.

Liberal use of a 70% isopropyl alcohol solution and a brush was able to scrub away all the corrosion on the computer’s logic board, and the results were not pretty:

Many PCB test pads were either corroded or entirely gone, the backlight fuse (and its pads) were nowhere to be found, and some ICs were missing entire pins! Whatever was spilled on this area of the MacBook certainly had some corrosive properties to it, and it looks like nothing was done to stop the initial damage. With a schematic and board-view software in hand, it was time to investigate what particular components had suffered damage.

Power Supply

Before any device can perform any useful functions, it needs power. I reconnected the AC adapter and started to check the voltages around the DC input jack and its surrounding support circuitry. Since I was able to press the MacBook’s battery indicator and get some response when connected to power, I knew that the main input fuse was intact, and that the SMC (System Management Controller) chip was receiving power via the PP3V42_G3H rail and functional; the G3H (G3-Hot) designation means that the power rail is always on, even if the computer is otherwise turned off.

I checked the voltage at the DC jack’s ADAPTER_SENSE line, which is normally at approximately 3.3 volts and uses the 1-Wire protocol to communicate with the power adapter and control the LED on the power adapter’s MagSafe plug. To my surprise, it was at a staggering 16 volts, which meant that something was shorting the DC input voltage (about 16.5 volts) onto a low-voltage communication line – no wonder there wasn’t any LED indicator when I plugged it in! A multimeter measurement found about 2 kOhms of resistance from the power line to the communication line. Thankfully, the MacBook’s logic board features a MAX9940 1-Wire overvoltage protection chip, which is rated to protect against voltages as high as 30 volts.

I scavenged another DC input connector from an older, dead MacBook which shared the same connector and pinout. After connecting this to the logic board, I got a green LED upon connecting the AC adapter, and the CPU fan started spinning; this is a very good sign, as it means the main power circuitry is intact. Measuring the CPU’s Vcore voltage revealed about 0.8 volts, which is normal for a modern laptop CPU. With the “heart” of the computer checked out, it was time to focus on the area most affected by the water damage.

LCD & Backlight

Examining the backlight connector and its surrounding circuitry revealed significant damage to many components and the PCB itself. The power supply pins on the LCD connector showed a significant amount of corrosion, and I was concerned that the backlight’s output voltage (up to 52 volts!) could have made its way through all the corrosion residue and damaged critical data lines between the display and the graphics controller. I noticed the backlight fuse (F9700, a 3-amp 0603-size fuse) had gone more than just open-circuit – I couldn’t even find the fuse or its corresponding PCB pads initially! I then probed the LCD connector and found that the display’s 3.3-volt power lines were open-circuit; the corrosion had eaten through the traces between the connector and its decoupling capacitors nearby. Using a diode-mode measurement on the FPD-Link (often called LVDS) lines revealed that the connections were intact; there weren’t any anomalous readings or short-circuits on those lines, the DDC (Display Data Channel) lines, and the 3.3-volt power lines.

Due to the high voltages used to drive LED backlights, I had my suspicions on U9701 (a Texas Instruments LP8550 LED backlight driver). It’s a tiny ball-grid array (BGA) package, and attempts to clean the chip from its edges didn’t seem to do much. Its corrosion looked limited at first – only the feedback line’s probe point was lost – but I was sure the chip was on its last legs (or is it balls?).

Power Management

The LCD connector is in close proximity to the computer’s DC input and its “PPBUS_G3Hot” power rail, which is always on (even if the computer is otherwise turned off); its relatively high voltage exacerbates any corrosion caused by liquid damage. Further examination revealed significant corrosion on the outside of the CPU’s high-side current-sense resistor (R5400), and the current-measurement pins (pins 4 and 5) on U5400 (a Texas Instruments INA213 current-sense amplifier) were completely gone! Clearly there was no way to salvage that component.

There was significant damage to the SMC’s DC input voltage sense circuitry (“VD0R”), with pins 3 and 4 of Q5490 (an ON Semiconductor NTUD3169CZ complementary pair of N-channel and P-channel MOSFETs) being completely eroded away, much like U5400’s current-sense pins; this part of the circuit uses a P-channel MOSFET to switch on a resistive voltage divider, allowing the SMC to measure the voltage on its MagSafe input connector. Many of the probe points related to that circuit were also completely eroded, revealing dark pits instead of silver-plated copper pads.

FireWire

The FireWire circuit wasn’t spared from the carnage, either. Pins 3 and 4 on Q4262 (a Diodes Incorporated BSS8042DW complementary MOSFET pair) were also severely damaged; these pins are used to quickly disable the FireWire power output transistor (Q4260, an ON Semiconductor FDC638P P-channel MOSFET) in case of a “Late-VG event”. This occurs if the ground pins of the FireWire connector are mated too late when plugging in a device – this creates a dangerous overvoltage condition on the FireWire data lines, as up to 30 volts briefly find a return path through the data lines, risking damage to the device and host controller. I wasn’t as concerned with this circuit, as I don’t have any FireWire peripherals, and the circuit in its current state simply means the FireWire port will be unable to disconnect power if a bad cord is plugged in.

Thunderbolt

The area that had the least liquid damage was around C3897, which belonged to U3890 (a Linear Technology LT3957, a 15-volt boost converter for the MacBook’s T29 chip and Thunderbolt interface). All this area needed was a bit of corrosion cleanup.

USB Port

During the functionality tests, I noticed the metal casings of the USB ports were getting very hot to the touch, and I nearly burned myself on U4600 (a Texas Instruments TPS2561, a dual-channel load switch with internal current limiting)! I found a short-to-ground fault on the power line of one of the USB ports, which explains the symptom listed on the note. I desoldered the chip, initially thinking the issue was in the chip itself, but the fault remained. I narrowed the problem down to C4695, a 10-microfarad ceramic capacitor that had short-circuited internally; this caused the TPS2561 to go into current-limiting mode, effectively turning the chip into a resistor that dissipated copious amounts of heat into the PCB, which made its way to the USB ports (and then my fingers – ouch).

Hard Drive Cable

During the repair process, I was able to install Mac OS X Lion to a SATA SSD, but soon found the MacBook unable to recognize SSDs, despite hard disk drives showing up just fine! As it turns out, the A1278 is notorious for bad HDD cables, with even replacements failing within months of installation. This appeared to be caused by chronic frictional damage, as the cable is sandwiched between the hard drive and the MacBook’s rough aluminum casing – even regular use of the laptop was found to create hairline cracks in the cable. Thankfully replacements are relatively inexpensive, and a little bit of Kapton tape as a barrier against the casing was the “vaccine” against future cable failures.

Repairs

With all of the problems written down, it was time to start fixing up the MacBook. Time to break out the hot air rework station, soldering iron, solder, magnet wire, and plenty of flux!

DC Input Jack

I desoldered the DC input jack, and found there was a lot of corrosion residue bridging the +16.5-volt power line to the ADAPTER_SENSE 1-Wire communication line.

With some isopropyl alcohol and some scrubbing with a small brush, I was able to clean up the corrosion and resolder the jack into place. A quick multimeter test confirmed that the 2 kOhm short from the power line to the data line was gone, and I got an LED indication when I plugged in the AC adapter, including an orange light indicating that the battery is charging.

LCD Connector

I wanted to determine if the display was still functional, so I first focused my attention on the LCD connector, even if it meant eschewing the LED backlight for a bit.

I ran a jumper wire from L9004 to pins 2 and 3 of the LCD connector; this belongs to PP3V3_LCDVDD_SW_F, which provides the 3.3-volt power to run the LCD panel except the backlight. After cleaning out the flux and corrosion on the logic board’s connector as well as the LCD cable, I was able to get an image on the display!

USB Port

With the faulty component identified, I replaced C4695 with an identically-rated 10-microfarad 6.3-volt X5R ceramic capacitor in an 0603-sized package. After replacing the capacitor, the USB port was fully functional again!

Current-Sense Amplifier

After ordering both the INA213 and LP8550 from Texas Instruments, it was only a few days before they arrived in the mail. I desoldered the dead chip from the logic board, cleaned up the pads with some flux and desoldering braid, and installed the new chip. Running Apple Service Diagnostic tools showed that the current-sensing circuit was working correctly.

DC Input Voltage Divider Switch

I didn’t want to buy another transistor pair for Q5490, so I replaced the P-channel half with an ON Semiconductor NTK3142P P-channel MOSFET that I salvaged from an older donor MacBook logic board. I scraped away some solder mask on one of the broken traces heading to the SMC’s voltage divider so I could solder the transistor’s drain terminal to it, and used magnet wire to connect the transistor’s gate and source to their corresponding locations across R5491. R5494, a 0-ohm resistor leading to PM_SUS_EN, was found to be open-circuit; this was easily bypassed with a wire jumper across the resistor’s original pads. After cleaning off the flux and performing continuity measurements, I measured the voltage at the SMC’s voltage divider resistors and got a valid reading when I plugged in the AC adapter.

LED Backlight Driver

The LP8550 was up next for repair. I took a 2-amp 0603-sized fuse from a dead hard drive, and used some magnet wire to reattach it to the remnants of F9700, which was a 3-amp fuse originally; note that it’s far safer to use a fuse of a smaller rating instead of a larger one, should a circuit fault still exist.

Tracing the other lines to the LP8550 revealed that R9731 (leading to PPBUS_SW_LCDBKLT_PWR) was open-circuit at a via, which was easily bridged with some solder and magnet wire. R9010 (leading to PPBUS_SW_BKL) was open as well.

After reinstalling the fuse, I actually got the backlight working! However, upon a power cycle I heard a snap, saw a puff of smoke, and lost the original backlight chip. Chances are some corrosion residue had indeed allowed 50-odd volts to end up on a more sensitive pin of the LP8550. I used an X-Acto knife to lightly scratch an outline around the chip, then used copious amounts of flux and desoldered the dead chip with my hot air rework station; I also removed the fuse to help with further troubleshooting and to ensure that there weren’t any short-circuits to ground in the backlight circuit. I cleaned up the area with leaded solder and some solder wick, and removed the residual flux in anticipation of the new chip’s installation.

The chip was remarkably easy to install – just get the A1 ball lined up according to the board view, and heat the board to the right temperature. After thoroughly cleaning away the flux from the area, I turned on the MacBook… and let there be (back)light! I power-cycled the computer and the LED backlight remained functional! (And for the record, the fuse didn’t even blow during the entire ordeal.)

FireWire Late-VG Protection Circuit

I considered this issue to be a “WONTFIX”, as I had no use for FireWire connectivity (nor do I have the correct FireWire 800 cables anyway). If I want to sell this computer, I might install a P-channel MOSFET to replace Q4262 (see the FireWire section above) in a similar fashion to the DC input voltage-sensing circuitry.

Testing

It takes a little bit of Google-Fu, but with the help of a BitTorrent client, I downloaded the disk images to create an Apple Service Diagnostic (ASD) drive. This is far more sophisticated than the built-in diagnostics you get by booting the computer while holding down the D key. With ASD, one has the option to use a stripped-down version of Mac OS X – in a similar vein to WinPE – or a very lightweight UEFI (Unified Extensible Firmware Interface) environment that looks very much like Mac OS 9 and earlier.

It took over half an hour, but all the tests passed without a problem, and all the sensor readings were valid. My MacBook Pro has been restored to working order! I installed Mac OS X High Sierra to a 1TB SSD, and used Boot Camp to run Windows 10 Pro as the default operating system (what can I say, I like Windows 🙂 ). The Mac Precision Touchpad driver project makes the touchpad a pleasure to use, as the built-in Boot Camp driver provides a much less comfortable experience.

Conclusion

Much like solving a puzzle, component-level troubleshooting of modern electronics is possible, but it is only feasible if the relevant documentation exists as a good reference point. One can do without it, but reverse engineering isn’t easy when all you have is a non-working device.

With the help of a schematic and board view (including the open-source software OpenBoardView), one can easily find what circuits a component belongs to, and where it goes. By following the connections, one can track down the problem(s) with the board, and hopefully save a device from an untimely end in the landfill or a recycling facility.

Right to Repair

This project is an example of why I believe in the right to repair. If I didn’t have (even unofficial) access to schematics, board views, and diagnostic software, I wouldn’t have been able to bring this dead MacBook Pro back to life. However, with a little bit of electronics troubleshooting knowledge and skill, I was delighted that I diverted a discarded dysfunctional device from a demise in the dumpster. In fact, this blog post was written from the MacBook I just repaired!

Discreet Quality: Review of the sketchiest-looking 512GB Lexar SDXC card

It’s amazing how much Flash-based storage technology has advanced in the last few years, especially considering how much prices have dropped.

Naturally, when it comes to speed, capacity and price, consumers tend to look for the lowest price; as manufacturers race towards the bottom line, many will take the low road and sell counterfeit goods. This is especially prevalent in the NAND Flash market, and online marketplaces like eBay, AliExpress and even Amazon are fraught with countless fake storage devices that claim high capacities at too-good-to-be-true prices. It’s not uncommon to see unrealistic capacities sold for a few tens of dollars, but what the customer ends up receiving is a storage device with a falsified capacity that will pass a simple copy-paste test but will corrupt itself with extended use.

While browsing eBay for deals on Flash storage, I happened upon a very strange-looking 512GB SDXC card. It was listed as an OEM Lexar card but had no labels, and was selling for an unprecedentedly low price of $60 USD (the card would cost several times more at normal retail outlets). Everything about the card’s exterior seems to raise a red flag that the card is not to be trusted.

eBay listing of the Lexar OEM 512GB SDXC card

Upon closer inspection, there are some hints that one shouldn’t always judge a book – er, card – by its cover. The laser-etched markings might look like cryptic gibberish to the layperson, but the markings “SM2702BAC” and “L95B” have actual meanings; the SM2702 is an SD card controller by Silicon Motion, and L95B refers to the 16nm generation of MLC NAND Flash by Micron, which owns the Lexar brand (but unfortunately is being discontinued). The seller also says that the cards have been tested, which is reassuring.

I decided to take the plunge and plunk down about $80 USD including shipping (or $105 CAD at the time) and buy a card for myself.

A Closer Look

After waiting a few weeks, the card showed up in my mailbox. The seller did a very good job packaging it, even placing the card in an ESD shielding bag before wrapping it with foam and placing it in a bubble mailer (far better than the bare plastic wrap some used i7 CPUs I’ve bought have arrived in).


The card looks very plain, with the top label area lacking any labeling, and the same laser-etched markings on the back. The card’s contacts indicate that it has been placed in a card reader a few times before (presumably for testing).

Card Identification

I used my old Gateway M-7305u laptop with Kali Linux to see what information the card reports. These older laptops have true SDA (SD Association) compliant card slots, so the card identifies as an actual SD device rather than as a USB drive, as it would with many modern laptops; in Linux it shows up as a device like /dev/mmcblk0 instead of /dev/sda. Using the “dmesg -wH” command, I can read the kernel logs as soon as the card is connected to the computer.

[Jan24 10:52] mmc0: new high speed SDXC card at address 59b4
[ +0.094917] mmcblk0: mmc0:59b4       483 GiB 
[ +0.001111] mmcblk0: p1

The card reports a capacity of 483 GiB (that’s binary gigabytes, or 519.6 decimal – a.k.a. “weasel” – gigabytes), but the SD card name is “     ” – five ASCII spaces. Everything about the card superficially rings alarm bells! However, I wasn’t fazed, and decided to try the card in my Kingston FCR-HS4 USB 3.0 card reader, which uses the Realtek RTS5321 chipset.

OEM Lexar 512GB SDXC card in Disk Management

Examining the card in Windows shows that it was formatted as exFAT with a volume name of “SDXC”, suggesting it may have been formatted by the seller with the SD Formatter tool. Looking at the raw sector data in Hard Disk Sentinel suggests that the seller did indeed do a full capacity test, as the data patterns match those of H2testw, an excellent tool for detecting fake Flash memory. This is a good sign – the seller did their due diligence, and by this point I already had a good feeling that the card was genuine.

However, I wanted to verify this for myself, so I ran H2testw on the card. The write speed remained consistent throughout, which is a good indication that the card is not silently overwriting memory locations the way fake Flash storage does (the card did get uncomfortably hot during the process, however). It took four hours to complete the write and read test, but everything came out clean – the card is genuine, even when every other sign says otherwise!

H2testw verifying that the OEM Lexar card’s 512GB capacity is genuine
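If you’d rather script the check than run H2testw, the core idea is simple: fill the card with data that can be regenerated deterministically, then read everything back and compare. Below is a minimal Python sketch along those lines (the mount point is a hypothetical placeholder, and it’s no substitute for the real tool); it catches the classic fake-capacity failure mode where later writes silently wrap around and overwrite earlier data.

# Simplified H2testw-style capacity check: fill the card with deterministic
# pseudorandom files, then read them back and verify. MOUNT is a placeholder
# for wherever the card is mounted. Requires Python 3.9+ (for randbytes).
import os, random

MOUNT = "/mnt/sdxc"            # hypothetical mount point of the card under test
CHUNK = 64 * 1024 * 1024       # one 64 MiB file per chunk
SEED  = 0xC0FFEE

def chunk_data(index):
    # Regenerable pseudorandom data: same seed + index always gives same bytes.
    return random.Random(SEED + index).randbytes(CHUNK)

# Write phase: keep writing chunks until the card reports it is full.
written = 0
while True:
    try:
        path = os.path.join(MOUNT, f"h2_{written:05d}.bin")
        with open(path, "wb") as f:
            f.write(chunk_data(written))
            f.flush()
            os.fsync(f.fileno())
        written += 1
    except OSError:            # ENOSPC (disk full) ends the write phase
        break

# Verify phase: any chunk that reads back differently points to fake capacity
# (or plain data corruption).
bad = 0
for i in range(written):
    with open(os.path.join(MOUNT, f"h2_{i:05d}.bin"), "rb") as f:
        if f.read() != chunk_data(i):
            bad += 1
            print(f"MISMATCH in chunk {i}")

print(f"{written} chunks written, {bad} mismatched")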

Performance

With the card verified, it was time to put it to the test.

CrystalDiskMark

The card showed sequential read speeds of 92.03 MB/s and sequential write speeds of 60.45 MB/s; the sequential write speed coincides with the seller’s rating of 400x (400 * 150 kB/s = 60 MB/s).

The random 4K I/O performance isn’t great, especially with writes, but it isn’t bad either. The card managed 4K random read speeds of 6.644 MB/s (1700.9 IOPS) and 4K random write speeds of 0.671 MB/s (171.8 IOPS).

Benchmark of the 512GB Lexar OEM SDXC card in CrystalDiskMark 3.0.4

Conclusion

In the end, I was satisfied – I got a 512GB SDXC memory card at a fraction of what it would cost at a normal retail outlet. It’s not exactly a speed demon, but it’s not a slowpoke either. The looks may be off-putting for most folks (and rightly so), but with the right tools and knowledge, one can pick up one of these less aesthetically pleasing memory cards and save some serious coin in the process.

eMMC Adventures, Episode 4: Recovering data from physically damaged BGA eMMC Flash storage chips

As seen on Hackaday!

The ball grid array (BGA) chip package has been instrumental in getting modern electronics to fit in smaller and smaller spaces, as it uses tiny balls of solder on the bottom of the package to make electrical connections, instead of copper leads on the edge of the chip package. This allows for hundreds of connections to be made in a small amount of PCB area, but their size also makes them very vulnerable to damage as well.

One common way for BGA chips to become damaged is called “pad cratering”, where the copper pad on the package’s substrate (basically a wafer-thin circuit board) separates and leaves behind a crater.

Continue reading

eMMC Adventures, Episode 3: Building a custom adapter to use cheap eMMC-based 32GB SSD modules

As seen on Hackaday!

While on my quest for more eMMC-based storage devices, I stumbled upon a few devices that piqued my interest: eMMC-based SATA SSDs! I found two models of particular interest: Dell had M.2 modules with a 2.5″ adapter, and HP had custom boards intended for use in cheap laptops (for example, the HP 14-an012nr). Although the former was easier for me to use (but not acquire), I will be focusing on the latter in this blog post.
Continue reading

eMMC Adventures, Episode 2: Resurrecting a dead Intel Atom-based tablet by replacing failed eMMC storage

As seen on Hackaday!

Recently, I purchased a cheap Intel Atom-based Windows 8 tablet (the DigiLand DL801W) that was being sold at a very low price ($15 USD, although the shipping to Canada negated much of the savings) because it would not boot into Windows – rather, it would only boot into the UEFI shell, which could not be interacted with without an external USB keyboard/mouse.

The patient, er, tablet

The tablet in question is a DigiLand DL801W (identified as a Lightcomm DL801W in the UEFI/BIOS data). It uses an Intel Atom Z3735F – a 1.33GHz quad-core tablet SoC (system-on-chip) – with 16GB of eMMC storage and a paltry 1GB of DDR3L-1333 SDRAM. It sports a 4500 mAh single-cell Li-ion battery, an 8″ 800×1200 display, 802.11b/g/n Wi-Fi using an SDIO chipset, two cameras, one microphone, a mono speaker, a stereo headphone jack and a single micro-USB port with USB On-The-Go support (this allows the port to act as a USB host port, allowing connections to standard USB devices like keyboards, mice, and USB drives).

Continue reading

eMMC Adventures, Episode 1: Building my own 64GB memory card with a $6 eMMC chip

As seen on Hackaday!

There’s always some electronics topic that I end up focusing all my efforts on (at least for a certain time), and that topic is now eMMC NAND Flash memory.

Overview

eMMC (sometimes written as e.MMC or e-MMC) stands for Embedded MultiMediaCard; some manufacturers use their own names, like SanDisk’s iNAND or Hynix’s e-NAND. It’s a very common form of Flash storage in smartphones and tablets, and even lower-end laptops. The newer versions of the eMMC standard (4.5, 5.0 and 5.1) have placed greater emphasis on random small-block I/O (IOPS, or Input/Output operations per second); eMMC devices can now provide SSD-like performance (>10 MB/s 4KB read/write) without the higher cost and power consumption of a full SATA- or PCIe-based SSD.

MMC and eMMC storage is closely related to the SD card standard everyone knows today. In fact, SD hosts will often be able to use MMC devices without modification (electrically, they are the same, but software-wise SD has a slightly different feature set; for example, SD cards have CPRM copy protection but lack MMC’s TRIM and Secure Erase commands). The “e” in eMMC refers to the fact that the memory is a BGA chip soldered (embedded) directly to the motherboard (which also prevents it from being easily upgraded without the proper tools and know-how).

Continue reading

Ramble: Fixstars’ 6TB SATA SSD – is it a thing?

If you know me personally, you’ll know that I absolutely love SSDs. Every PC I own has one, and I can’t stand to use a computer that runs off an HDD anymore. Naturally, when I read about a 6 TERABYTE SSD coming out, it piqued my curiosity.

Official SSD-6000M promotional photo, taken from Fixstars’ press release (photo owned by Fixstars; retrieved from http://www.fixstars.com/en/news/wp-content/uploads/2015/05/SSD-6000M.png)

A Japanese company by the name of Fixstars has recently announced the world’s first 6TB SATA-based SSD. Although 2.5″ SSDs in that capacity range already exist, they’re SAS (Serial Attached SCSI) based, which limits them primarily to server/datacenter usage. According to Fixstars’ press release, their SSD-6000M supports sequential read speeds of 540 MB/s and sequential write speeds of 520 MB/s, which is on par with most modern SATA III (6 Gbps) SSDs on the market today.

Concerns

However, after reading a bit online, I’m beginning to have some concerns about the drive’s real-world performance. One thing that is rather worrying is that the company has only mentioned sequential I/O speeds and has said nothing on random I/O or read/write latency; although SSDs do have much better sequential speeds than their mechanical spinning counterparts, they really shine when it comes to random I/O (which makes up much of a computer’s typical day-to-day usage). In the early, early days of SSDs, manufacturers cared only about sequential I/O and it resulted in some SSDs that were absolutely terrible when it came to random I/O (fun fact: I once had an early SSD, the Patriot PS-100, and its performance was so bad that it actually turned me off of SSDs for a few years, so I know how bad such unoptimized SSDs can perform).

Construction

The SSD appears to be made up of 52 eMMC (embedded MultiMediaCard) chips in a sort of RAID 0 configuration and an FPGA (field-programmable gate array) as the main controller. In layman’s terms, this SSD is literally made up of a bunch of SD cards “strapped” together with a chip so that it appears as one single drive. In that sense, one can make a similar solution using a board like this, which parallels multiple microSD cards to act as a single ‘SSD’.

The consumer equivalent of the SSD-6000M: SD cards and a controller chip. You can even get them from Amazon. (Image retrieved from http://ecx.images-amazon.com/images/I/51y0QqWL5sL.jpg)

Conclusion

I’m wary of how well this SSD is going to take off. It could end up being a tremendous success, but it’ll certainly be out of the reach of the consumer market – either by its potentially poor random I/O performance, or its price (apparently it will cost well over $6000 USD).

Review of SanDisk Extreme CompactFlash 32GB (SDCFXS-032G)

After my previous review of a Silicon Power 8GB CompactFlash memory card, I was looking around for more CF cards to review, in the hopes of finding a higher-performing card with S.M.A.R.T. health reporting and the ability to act as a “fixed disk” (that is, identifying to the system as a hard drive rather than a removable disk), so I decided to purchase this memory card from Amazon.

Advertised specifications

The card’s specifications indicate that it is capable of 120MB/s sequential read and 60MB/s sequential write speeds, has a lifetime warranty, and comes with a license key for a 1-year subscription to the RescuePRO data recovery software. It is advertised to have internal RTV (room-temperature vulcanization) silicone potting, an operational temperature range of -25 to 85 degrees Celsius (-13 to 185 Fahrenheit), SanDisk’s “ESP (Enhanced Super-Parallel) Technology” (which I presume is some sort of proprietary multi-channel controller), and UDMA 7 (167 MB/s maximum interface speed) capability.

Benchmark – Setup

To connect the card to my computer, I used a CompactFlash-to-IDE converter and a Marvell 88SE9128-based SATA/PATA host bus adapter. This allows me to use up to UDMA 6 (133 MB/s maximum interface speed); UDMA 7 is basically restricted to cameras, as it is only part of the official CompactFlash specification.

Benchmark – CrystalDiskMark

For this test, I manually zero-filled the card using Hard Disk Sentinel, formatted it with exFAT, then ran CrystalDiskMark, set to 3 runs with a 500MB file size using random data, all zeros (0x00), and all ones (0xFF).

Data Type     Test               Read (MB/s)  Write (MB/s)  Read IOPS  Write IOPS
Random        Sequential         103.2        52.45
              512K Random        99.55        29.57
              4K Random (QD1)    11.37        0.916         2775.2     223.6
              4K Random (QD32)   17.24        1.413         4208.2     344.9
All 0 (0x00)  Sequential         104.3        54.25
              512K Random        98.27        31.22
              4K Random (QD1)    11.36        1.1           2773.3     268.5
              4K Random (QD32)   17.39        1.263         4244.5     308.4
All 1 (0xFF)  Sequential         104.5        53.95
              512K Random        98.05        25.84
              4K Random (QD1)    11.19        1.112         2733       271.4
              4K Random (QD32)   17.32        1.437         4229.3     351

It appears that there is no significant difference between the tests depending on what data was used for the benchmark.

Benchmark – AS SSD

As with CrystalDiskMark, I zeroed out the card and formatted it as exFAT before running the test.

Test           Read         Write
Sequential     99.70 MB/s   46.13 MB/s
4K             11.40 MB/s   0.74 MB/s
4K 64 Thread   12.80 MB/s   1.03 MB/s
Access Time    0.389 ms     5.504 ms
Score          34           6
Total Score    61

Benchmark – Hard Disk Sentinel

I ran three separate benchmarks with Hard Disk Sentinel’s Surface Test feature, using the read and write (both empty and random data) tests, and used the Random Seek Test to measure the card’s responsiveness after filling it with empty and random data.

Test Speed
Read 0x00 95.20 MB/s
Read Random 97.30 MB/s
Write 0x00 49.81 MB/s
Write Random 49.04 MB/s
Seek Time 0x00 0.35 ms
Seek Time Random 0.37 ms

Once again, there does not appear to be any appreciable difference between an empty (zeroed-out) or full card.

Analysis – HWiNFO64

Now that the benchmarks are out of the way, let’s take a look at the card and what it can (and can’t) do. Let’s take a look at the details of the drive…

The card shows up as a regular IDE drive in HWiNFO, with information about its CHS (Cylinder-Head-Sector) geometry and supported I/O interface speeds. Here we can see the card supports up to UDMA 7 but is running at UDMA 6 because it is connected to a PC IDE bus.

Now for the kicker: Does the drive identify itself as a fixed or removable disk? Cross your fingers…

NOPE! The SanDisk Extreme CompactFlash card does NOT identify as a fixed disk, but instead as a removable drive. This means that the hopes of using this as a bootable Windows disk are now out the window. [ba-dum-tssh!]

Analysis – Hard Disk Sentinel

Looking at the Overview tab in HDS, something weird is happening: it states that “the hard disk status is PERFECT”, yet it has no health or performance percentages available. Opening the Information tab shows that the SanDisk Extreme CompactFlash card does NOT support S.M.A.R.T. health reporting. Bummer. Additionally, it appears that Windows does not play well with removable IDE drives that lack S.M.A.R.T., instead reporting garbage data (or data mirrored from another drive in the system).

Looking further inside the Information tab, we can see the features that the memory card does support. It supports DMA, Ultra DMA, APM (advanced power management), write caching, 48-bit LBA (logical block address) addressing, IORDY (flow control), a NOP (no-operation) command, and has the CFA (CompactFlash Association) feature set.

Since the card reported that it supported APM, I tried to enable it but the card refused to accept the command.

Conclusion

Overall, I like this card quite a bit. It has fast sequential I/O and a respectable random read speed. However, this is spoiled by the fact that the card is configured to show up as a removable disk, which renders it unusable as a Windows boot drive, and the lack of S.M.A.R.T. health and temperature reporting makes me a bit uneasy, as I cannot track the card’s program-erase cycle count during use.

Oh well. Looks like the hunt for a fast, fixed-disk CompactFlash card continues…

Teardown/review of Silicon Power 8GB 200x CompactFlash memory card

Hooray for nice hand-me-down SLR cameras! I finally have a better camera than the one built into my (now ancient) Samsung Galaxy S II for taking pictures for this blog. The camera, a Canon EOS 50D, came with an 8GB CompactFlash card that I was preparing to erase and reuse, but I had problems trying to read out the card’s contents; a few stubborn files would refuse to copy, and Explorer would simply hang until I restarted the program or unplugged the card. Additionally, when using Hard Disk Sentinel to do a surface scan, it too would freeze when reading a certain sector on the card.

Instead of using a USB-to-CompactFlash adapter (I could not find my card reader and have not seen it for over a year now, come to think of it) I used a CompactFlash-to-PATA adapter, then a PATA-to-SATA adapter so I could directly hook up the card to my computer. In addition to having greater theoretical throughput, it allows me to view the S.M.A.R.T. diagnostic data that the card provides.

Memory card issues and performance

The diagnostic information doesn’t really provide any insight into the health of the card; none of the S.M.A.R.T. attributes are listed as critical, and many of them are listed as vendor-specific. Oh well, at least it gave me some sort of information…

After finding a copy of the card’s contents on my home server (I had apparently backed up the card before the corruption occurred, but didn’t recall doing so until I raked through some of my archives), I decided I’d do a full card erase and see if that would make the card usable again. I called up the Surface Test in Hard Disk Sentinel and used its surface-write tool to erase the user-accessible area of the card. A few blocks wrote dramatically slower than the rest, and repeated write tests did not resolve their sluggishness; I call shenanigans on the memory card’s controller and its reluctance to reallocate problematic sectors…

The card itself isn’t very fast. The sequential I/O of the card is good enough for casual photography, but I would definitely not use this card in an embedded system that uses a CompactFlash as a sort of mini-SSD; even though it shows up in my system as a hard drive (non-removable), its random I/O is quite sluggish and its random write speed is worse than that of a standard hard disk drive.

Teardown

The card itself is a sandwich of aluminum plates, a plastic case and the PCB assembly that holds the controller, Flash memory and the CompactFlash connector. A hobby knife run under the aluminum plate was able to separate the plate from the plastic body; some glue and a couple clips were the only things holding the card together.

The card’s controller is a Phison PS3006, which sports a PCMCIA (and therefore CompactFlash) interface with True IDE (or plain PATA) support. It contains an 8051 microcontroller core with a few components to assist with interfacing with the Flash memory, such as a hardware ECC (error correction code) engine and a small amount of SRAM for a buffer.

The datasheet for the PS3006 doesn’t provide information on the S.M.A.R.T. attributes, nor does it indicate what type of Flash wear-leveling is provided. Given the controller’s limited computing capabilities, I’m thinking it uses a less-complex but less-reliable form of wear leveling, known as dynamic wear leveling (see Micron’s application note for more information). It’s less capable of dealing with memory wearout, but doesn’t require the computing overhead of static wear leveling (which proper SSD controllers use to keep performance up).
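To make the distinction concrete, here’s a toy illustration (purely conceptual, not a description of the PS3006’s actual firmware) of why dynamic wear leveling struggles with static data: writes are only ever spread across the currently-free blocks, so blocks holding untouched data never shed any of the wear burden.

# Toy model of dynamic wear leveling: new writes go to the least-worn *free*
# block, but blocks holding static data are never relocated, so they never
# share the wear. (Illustrative only -- not how the PS3006 actually works.)

class DynamicWearLeveler:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks
        self.free_blocks = set(range(num_blocks))

    def write(self):
        # Choose the least-erased block, but only among the free ones.
        block = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(block)
        self.erase_counts[block] += 1
        return block

    def invalidate(self, block):
        # Data was deleted or rewritten elsewhere; the block becomes free again.
        self.free_blocks.add(block)


wl = DynamicWearLeveler(num_blocks=8)
static_blocks = [wl.write() for _ in range(4)]   # written once, never touched again

for _ in range(1000):                            # hammer the rest with rewrites
    block = wl.write()
    wl.invalidate(block)

print("erase counts:", wl.erase_counts)
# The four static blocks sit at 1 erase while the free pool racks up ~250 each;
# static wear leveling would periodically move the static data to even this out.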

The memory is an Intel 29F32G08AAMD2 device, which is an asynchronous MLC NAND Flash memory chip. There are two installed on this card with another two footprints on the PCB being unpopulated, suggesting that the 16GB version of this card has all four footprints populated.

Conclusion

Given the simplicity of the card, I don’t really have much else to add about it. Either way, it’s lost my trust with regards to holding my photos. I bought a NOS Disk 16GB CF card from Amazon as well as a SanDisk Extreme 32GB, and plan to use the latter to hold my photos, with the former mainly being a simple curiosity of the construction of a card from a lesser-known manufacturer. Hopefully those will also provide S.M.A.R.T. data, as I prefer Flash-based storage devices with some sort of S.M.A.R.T. data capability. (Is it an insatiable thirst for knowledge? A means of doing regular ‘check-ups’ on my storage devices? Probably the latter, but maaayyyybe the former as well. 🙂 )

A Little Pick-Me-Up: Samsung 840 EVO SSD slowdowns, and how to fix it (for now…)

There’s been word going around that Samsung’s 840 EVO solid-state drives have an issue where they become really, really slow to read if the data on it has been sitting around for a few months, and I can confirm this is the case as well.

The first half of the drive (which holds a fair amount of static data) was being read at around 30 MB/s, with newer data being read at almost 500 MB/s. That’s a pretty big difference. One thing to note (I didn’t take a screenshot for this) is that although the overall read speed was significantly affected, the read latency was only somewhat slower; only about 10-20 microseconds of extra latency.

To temporarily fix this (at least until Samsung releases a firmware update in the middle of October), I used Hard Disk Sentinel to read and rewrite all of the data on the SSD. Because this involves accessing data that is normally locked by Windows, I made a custom WinPE (a slimmed-down, portable version of Windows that’s used for installation and recovery) image with Hard Disk Sentinel inside it. This allowed me to boot outside of the normal Windows setup, and perform the Read+Write+Read test to refresh all of the data stored on the SSD. Note that this will impart a lot of write activity to the NAND flash in the SSD (hence a chance for increasing wear), but modern SSDs aren’t as delicate as people might think.

Hard Disk Sentinel’s “Refresh Data Area” test

This took about 2 hours on my 250 GB SSD. Afterwards, another read test showed that the drive was working smoothly again.
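For the curious, the file-level equivalent of that refresh is simply reading each file and writing the same bytes back in place, which forces the SSD to reprogram those cells. A rough sketch is below (the paths are hypothetical placeholders; it skips locked/system files, which is exactly why I used a WinPE boot and HDS’s sector-level test for the real job):

# Crude file-level "refresh": read every file under a directory tree and write
# the same bytes back in place, forcing the SSD to reprogram those cells.
# Locked/system files are skipped, and this does add write wear.
import os

ROOT  = r"D:\static_data"      # hypothetical path holding the old, slow-reading data
CHUNK = 4 * 1024 * 1024

for dirpath, _, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            with open(path, "r+b") as f:
                offset = 0
                while True:
                    data = f.read(CHUNK)
                    if not data:
                        break
                    f.seek(offset)
                    f.write(data)          # rewrite the same bytes in place
                    offset += len(data)
                f.flush()
                os.fsync(f.fileno())
        except OSError:
            print(f"skipped {path}")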

Will I still buy a Samsung SSD? Absolutely. No data was lost and Samsung did the right thing by acknowledging the issue and also finding a way to fix it, as opposed to simply calling it a non-issue and sweeping it under the rug.

(Part 2 of 2) Microdrive Adventures: Looking into (and butchering) the Hitachi Microdrive and Seagate ST1 CompactFlash hard drives

(Part 1 viewable here)

Content advisory: electronics gore! 😀

I sent screenshots from Hard Disk Sentinel to the seller of the microdrives, and they refunded my money but didn’t want the drives back. Even then, it’d probably be a good idea to destroy the drives, since re-using them after getting refunded would be a bit… fraudulent. I decided to throw the drives around to see how well they’d hold up to physical abuse.

The Microdrive died when I whipped it against the concrete floor of my basement, go figure. The impact was strong enough to bend the steel frame but not enough to shatter the glass hard disk inside. Obviously, the disk didn’t spin up or enumerate in Hard Disk Sentinel. Now that the drive’s murder has been accomplished, it’s autopsy time!

The Seagate ST1 was put through a similar treatment, but it died much less gracefully when plugged in. The main controller chip (I think) shorted internally, and after about 15 seconds of being powered up, it released the magic smoke. The board’s plastic liner was melted where the chip shorted out. The drive internals weren’t much different than the Hitachi drives so I didn’t bother taking pictures of the drive’s insides.

After the damage was done, the drives were promptly put in a small plastic bag to be put in an electronics recycle bin.

(Part 1 of 2) Microdrive Adventures: Looking into (and butchering) the Hitachi Microdrive and Seagate ST1 CompactFlash hard drives

A few weeks ago I decided to hop onto eBay and buy a couple microdrives for fun. If you haven’t heard of the term, a microdrive is a hard disk drive that fits into a CompactFlash slot. These were intended to be the future in mobile storage, with 20 GB drives being the biggest around 2006. Of course, these drives proved to be very delicate, and besides, now we get 128 GB microSD cards!

The drives I purchased appeared to be pulled from some old iPod minis. The seller tried to remove the Apple logo with some sort of solvent, but left the smudges behind.

The problem with the iPod mini drives is that their CompactFlash interface is disabled. That is, the drive is really just a PATA drive in a CompactFlash’s body. Few devices that aren’t PCs support CompactFlash cards in this mode.

Being the curious type, I popped the drives into my Sony Clie NX73V, which I still carry with me even though it’s 11 years old 🙂 . It has support for CompactFlash Type I and II (thin and thick, basically), and, according to the properties window in the OS, uses the ATA protocol to talk to the cards. This means it should interface with the cards just fine… right?

First, I popped the Hitachi Microdrive in my Clie. One second after inserting the card, I see a question mark in the memory card’s taskbar icon. No dice.

Then, I moved on to the Seagate ST1. It spun up, but the Clie hung for about 30 seconds before finally displaying “The card cannot be recognized”. However, it did at least enumerate with the OS and I could pull up the manufacturer and model number of the drive.

Hm, well those ideas were dashed pretty quickly. Later, I bought a CompactFlash-to-PATA adapter, and a PATA-to-SATA adapter so I could hook it up to my laptop. From there, I used Hard Disk Sentinel (great software, by the way!) to analyze the drives and see if they have S.M.A.R.T. health reporting…

… and they do, alright! In fact, the drives I purchased were both on their last legs. The Seagate drive had hundreds of bad sectors and a failing disk head/head actuator. The Hitachi drive had so many reallocated sectors that it had literally run out of spares. Too bad the Microdrive didn’t report how many sectors were reallocated, though…

The drives themselves were in really bad shape, as seen below:

In the next part, I’ll show the aftermath of both drives. (Content Advisory: electronics gore)