Archive for the ‘Hardware’ Category

Apple Watch: Waterproof or not? - March 30th, 2015

A few of us today at work discussed the upcoming Apple Watch (pre-orders from 10th April) and the topic of water-proofing came up.

A little research on Google revealed no “definitive” answer, so I took matters into my own hands and hunted through the main official Apple Watch pages for any hint.

Sure enough I found this:

*Apple Watch is splash and water resistant but not waterproof. You can, for example, wear and use Apple Watch during exercise, in the rain, and while washing your hands, but submerging Apple Watch is not recommended. Apple Watch has a water resistance rating of IPX7 under IEC standard 60529. The leather bands are not water resistant.

Cite: Footnote on https://www.apple.com/watch/health-and-fitness/ accessed 2015-03-30

Wikipedia helpfully explains the meaning of “IPX7”:

Where there is no data available to specify a protection rating with regard to one of the criteria, the digit is replaced with the letter X.
The first digit indicates the level of protection that the enclosure provides against access to hazardous parts (e.g., electrical conductors, moving parts) and the ingress of solid foreign objects.
The second digit indicates the level of protection that the enclosure provides against harmful ingress of water.

Cite: http://en.wikipedia.org/wiki/IP_Code#Liquid_ingress_protection

So no data is available regarding solid particles (hence the “X”).

Regarding water, though, there is data: it’s rated at “7”:

Immersion up to 1 m

Ingress of water in harmful quantity shall not be possible when the enclosure is immersed in water under defined conditions of pressure and time (up to 1 m of submersion). Test duration: 30 minutes

The lowest point of enclosures with a height less than 850 mm is located 1000 mm below the surface of the water, the highest point of enclosures with a height equal to or greater than 850 mm is located 150 mm below the surface of the water

So “waterproof”?  I’d say not.  “Survive use in the shower and bath”?  Absolutely.   “Come out of a swimming pool working”?  I’d personally not risk £400 on it.

For cross-reference against Wikipedia, this PDF explains the IP codes in a very readable fashion: http://www.osram.co.uk/media/resource/hires/342330/technical-application-guide—ip-codes-in-accordance-with-iec-60529-gb.pdf

Reconstructing heavily damaged hard drives - July 3rd, 2008

[EDIT: Hey guys, thanks for the feedback! Someone over at virtuallyhyper.com has an awesome write up that deals with SD cards specifically (but is highly relevant to hard drives too), with a set of much improved and updated scripts. I’d strongly recommend taking a look … Recover files from an SD card using Linux utilities]

Recover data even when an NTFS (or other) filesystem won’t mount due to partial hard drive failure.

This was born when someone brought me a near-dead hard drive (a serious number of read errors, so bad that nothing could mount or fix the filesystem), asking if I could recover any data.

Now, as almost any geek would know, the answer is very likely yes. There are many ways of recovering data. One such way (which I performed) is using Foremost to find files based on their headers and internal structures. While this technique works quite well, it misses a lot of files, fragments others, leaves bits out and generally retrieves no metadata (such as filenames).
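For reference, a basic Foremost run looks something like this (the image path, output directory and file types here are examples, not the exact command from this recovery):

# Carve JPEG, DOC and PDF files out of a raw image taken from the dying disk
# -t is the list of file types to look for, -i the input image, -o the output directory
foremost -t jpg,doc,pdf -i /mnt/external/hda.img -o /mnt/external/carved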

This makes Matt mad. No filenames == days of renaming files.

So I booted up Helix, created a quick image of the drive to a 500GB external drive, and tried running Autopsy (the GUI of Sleuthkit). This is where things got interesting.

I say interesting, because Sleuthkit couldn’t read the filesystem. But it could retrieve the inodes, and the metadata along with them. And it could accordingly retrieve the data content of (some) files.

Observing this, I realized there was a high probability that I could somehow use Sleuthkit’s command line tools to retrieve the files which were not on bad clusters and recover the filenames from the inode. As it turns out, this wasn’t such a bad idea!

There are 3 tools which proved useful:

  • ils
  • ffind
  • icat

ils “lists inode information” from the image, ffind “finds the name of the file or directory using the given inode” and icat “outputs the content of the file based on its inode number”. Using these three tools and a bit of bash, we can grab a list of inodes, get the filename from the metadata, create the directory structure beneath it, extract the file content, and move on to the next.
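To make that concrete, here’s the manual version for a single inode (inode 1234 is made up for illustration; /dev/hda1 is the failing partition):

# List inode information from the partition (or an image of it)
ils -a /dev/hda1

# Ask which path inode 1234 (an example number) belonged to
ffind /dev/hda1 1234

# Dump the data content of inode 1234 into a file
icat /dev/hda1 1234 > recovered-file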

So for this task I knocked up the following (really ugly, potentially unsafe) script:

#!/bin/sh
# Read inode numbers (one per line) from /tmp/inodes. For each inode that
# ffind can resolve back to a path, recreate the directory structure under
# /mnt/out and dump the file's content there with icat.
TSK=/KNOPPIX/usr/local/sleuthkit-2.09/bin

for inode in $(cat /tmp/inodes) ; do

	# Only carry on if ffind can map this inode back to a filename
	if INODEDIR=$("$TSK"/ffind /dev/hda1 "$inode")
	then
		echo "INODE: $inode"

		REALDIR="/mnt/out$(dirname "$INODEDIR")"
		FILENAME="/mnt/out$INODEDIR"
		mkdir -p "$REALDIR"

		echo "FILENAME: $FILENAME"
		"$TSK"/icat /dev/hda1 "$inode" > "$FILENAME"

		# Crude heuristic: if du reports a single block, assume the inode
		# was really a directory and replace the file with a directory of
		# the same name. Wildly inaccurate -- see the note below.
		if [ "$(du "$FILENAME" | awk '{print $1}')" = "1" ]
		then
			rm "$FILENAME"
			mkdir -p "$FILENAME"
		fi
		echo ""
	fi
done

Really, I do warn you, take serious care running this!

It needs a lot of work, but enough is there for it to function. It reads a file of inode numbers (one per line) and uses ffind to get the filename. We extract the path, attempt to create it, output the file content and (this is important) take a wild guess at whether the inode was a directory. Please note this guess is wildly inaccurate and needs serious rethinking! Currently we look at the size du reports and assume that anything occupying only a single block is a directory.

We can populate a file with inode numbers like so:

ils -a /dev/hda1 | awk -F '|' '{print $1}' > /tmp/inodes

(Users of Helix will need to use the full pathname to ils as in the above script).

At some point (no guarantees when) I’ll tidy up the script and make it more bulletproof. In the meantime, I hope this saves some data!

Remember: no matter how much data you have, it’s always better to have two mirrored hard drives of half the size than one large, expensive drive. Drives will die unexpectedly! When you next buy a bigger hard drive, consider this: a single 500GB drive failing will lose you 500GB of data, whereas with 2x250GB mirrored you will almost certainly lose nothing. So if you’re on a tight budget, buy two smaller drives; if you have plenty of money, buy two big ones.
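If you fancy doing the mirroring in software under Linux, mdadm is the usual tool. A minimal sketch (the device names are examples, and this will destroy whatever is currently on them):

# Build a two-disk RAID1 mirror from two example devices, then put a filesystem on it
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/data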

Oh, and always make regular backups. Cheap USB drives are good for this!
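A dead-simple way to automate that on Linux is an rsync line in cron, something along these lines (the paths and schedule are examples):

# crontab entry: mirror /home onto a cheap USB drive at 2am every night
# (--delete makes the copy an exact mirror, so deletions propagate too)
0 2 * * * rsync -a --delete /home/ /mnt/usbdisk/home-backup/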

MacBook case crack - June 29th, 2008

Crackbook? Courtesy of Engadget.

To celebrate my 2:1 degree (praise the Lord), my MacBook decided to acquire a none-too-small crack on the wrist rest. Naturally, Apple have spent the weekend getting new parts in and fixing it for free, but one does have to wonder whether they’re falling victim to cheap materials.

Epson AL-C1100 efficiency - April 2nd, 2008

Today we discovered that Epson’s wonderfully affordable colour laser printer (the AL-C1100, networked) isn’t actually as cheap to run as they claim.

Epson are quick to promote the number of pages per cartridge this bessy can do (which, I agree, is very good for a colour laser) but they somehow forgot to mention that the photoconductor unit, which costs £150 to replace, only lasts 15,000 pages. That’s £150 ÷ 15,000 pages = another 1p per page, or 2p per double-sided sheet.

Ouchies.

So before you go buying one of these printers, when you’re calculating the cost per page, don’t forget to add 1p to your total for that stupid photoconductor unit.