Sometimes there’s just no substitute for the right diagnostic tool. [Ankit] was trying to port some I2C code from an Arduino platform to an ARM chip. When the latter code wasn’t working, he got clever and wrote a small sketch for the Arduino which would echo each byte that came across I2C out to the serial line. The bytes all looked right, yet the OLED still wasn’t working.
Time to bring out the right tool for the job: a logic analyzer or oscilloscope. Once he did, the problem was obvious (see the banner image: Arduino on top, ARM on bottom): he had misunderstood what the ARM code was doing and was accidentally sending an I2C stop/start sequence between two bytes. With that figured out, he was on the right track in no time.
We just ran an epic post on troubleshooting I2C, and we’ll absolutely attest to the utility of having a scope or logic analyzer on hand when debugging communications. If you suspect that the bits aren’t going where they’re supposed to, there’s one way to find out. It’s conceivable that [Ankit] could have dug his way through the AVR’s hardware I2C peripheral documentation and managed to find the status codes that would have also given him the same insight, but it’s often the case that putting a scope on it is the quick and easy way out.
Filed under: Microcontrollers
The first time I was in school for electrical engineering (long story), I had a professor who had never worked in the industry. I was in her class and the topic of the day was measuring AC waveforms. We got to see some sine waves centered on zero volts and were taught that the peak voltage was the magnitude of the voltage above zero. The peak-to-peak voltage was, surprise, the voltage from the top peak to the bottom peak, which was double the peak voltage. Then there was root-mean-square (RMS) voltage. For those nice sine waves, you took the peak voltage and divided by the square root of two, about 1.414.
You know that kid in the front of the class? They were in your class, too. Always raising their hand with some question. That kid raised a hand and asked the simple question: why do we care about RMS voltage? I was stunned when I heard the professor answer, “I think it is because it is so easy to divide by the square root of two.”

So What’s the Right Answer?
This made me really angry. I was paying good money to be there and that was the answer? Even at that young age I knew better. There are two things wrong with the professor’s answer. First, dividing by the square root of two is only valid for the pretty sine waves we were studying. Any more complex waveform requires calculus to get the right answer. For example, a triangle wave’s RMS voltage is the peak value divided by the square root of three.
However, the biggest problem is that the answer is nonsense. It is even easier to divide by ten, but that’s of no value. The reason you want to measure RMS voltage is simple: 1 V RMS does the same amount of work in a load as 1 V DC. A resistor subjected to 1 V DC and one subjected to 1 V RMS will generate the same amount of heat, for example.

So?
Not only is this practical, but it makes Ohm’s law continue to work. We know that power is I times E. Since I is also equal to E/R, you can deduce that power is E (the voltage) squared over the resistance. But in an AC circuit, what is that voltage? It isn’t peak or peak-to-peak–that would give a wrong answer. It is, instead, the RMS voltage.
Take another example. If you look up common RMS values for different waveforms, you’ll see the RMS voltage of a square wave with peak voltage V is just V. That’s because a real square wave goes in equal amounts positive and negative. So a 5 V square wave, for example, is always at either 5 V or -5 V and, either way, the same amount of work gets done.
But what about a pulse train from 0 V to 5 V with a duty cycle of 50%? Now the RMS value is the peak voltage (5 V) times the square root of the duty cycle (0.5), or about 3.5 V. The calculus can get hairy, but if you have a set of discrete measurements (as you probably do in any real-life situation) you simply apply the name in reverse: square each sample, find the mean (average), then take the square root.
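The sample-based procedure above is easy to sketch in code (a quick illustration of the math, not something from the original lesson):

```python
import math

def rms(samples):
    """Root-mean-square: square each sample, take the mean, then the root."""
    return math.sqrt(sum(v * v for v in samples) / len(samples))

# A 0-5 V pulse train at 50% duty cycle, sampled at twice the PWM frequency:
pulse_train = [5, 0, 5, 0, 5, 0, 5, 0]
print(rms(pulse_train))          # ~3.54 V, i.e. 5 * sqrt(0.5)

# A sampled 10 V peak sine wave approaches Vpeak / sqrt(2):
sine = [10 * math.sin(2 * math.pi * n / 1000) for n in range(1000)]
print(rms(sine))                 # ~7.07 V
```

The same three steps work for any waveform, which is exactly why the square-root-of-two shortcut is a special case, not the definition.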
Consider the pulse train example. With eight samples taken at twice the frequency of the PWM, you’d expect to get four 5 V readings and four 0 V readings. If you square these samples, you get four values of 25 and four that are still 0. So the average will be 100/8, or 12.5, and the square root of 12.5 is about 3.5. That matches the answer from the table (that is, five times the square root of 0.5).

Meters
You can compute RMS voltage using an oscilloscope. With a meter, however, it can be tricky, depending on what the meter actually measures. Most meters that don’t claim to measure RMS read the average value of the voltage. Some meters measure RMS, but only for a sine wave. Older true-RMS meters used thermal or electrodynamic methods to measure the RMS value, while modern meters are usually adept at measuring RMS, at least for pure AC signals. Only a very few meters have an option to measure RMS voltage for signals with a DC component, like an offset.
According to Fluke, an average-reading meter will read 10% high on a square wave, 40% low on the output of a single-phase diode rectifier, and anywhere from 5% to 30% low on the output of a three-phase diode rectifier. Big difference.

A Mad Professor (as in Angry)
In those days, I wasn’t smart enough to hold my tongue, so I raised my hand from the back of the class and explained the above (perhaps a little more succinctly). The response from the professor: “Oh yeah. That too.” I won’t mention the name of the school or the professor, but it did prompt me to find a new school.
With the increased use of digital instrumentation and calculators, there is a propensity for spewing off numbers without thinking about what they mean. In this case, saying that an AC voltage is 20 volts isn’t really a complete answer. We need to know what kind of voltage measurement the 20 represents and we also need to understand what that means and how it fits for the question at hand.
If you want to get more into the calculus, you might enjoy [Darryl Morrell’s] video, below.
Photo credit: sine wave graph by [AlanM1] (CC BY-SA 3.0)
Filed under: Hackaday Columns, rants
Do any of you stay awake at night agonizing over how the keytar could get even cooler? The ’80s are over, so we know none of us do. Yet here we are: [James Cochrane] has gone out and turned an HP ScanJet into a keytar, for no apparent reason other than he thought it’d be cool. Don’t bring the ’80s back, [James], the world is still recovering from the last time.
Kidding aside (except for the part about not bringing the ’80s back), the keytar build is simple but pretty cool. [James] took an Arduino, a MIDI interface, and a stepper motor driver and integrated them into some of the scanner’s original features. The carriage that used to run the optics back and forth now produces the sound; the case of the scanner provides the resonance. He uses a sensor to detect when he’s at the end of the scanner’s travel, and it instantly reverses to avoid a collision.
An off-the-shelf MIDI keyboard acts as the input for the instrument. As you can hear in the video after the break, it’s not the worst-sounding instrument in this age of digital music. As a bonus, he has an additional tutorial on making any stepper motor a MIDI device at the end of the video.
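The core trick in driving a stepper as a MIDI voice is just converting note numbers into step rates. A minimal sketch of that mapping (standard equal-temperament math with A4 = MIDI note 69 = 440 Hz; this is the usual convention, not [James]’s actual code):

```python
def midi_note_to_hz(note):
    """Equal-temperament pitch: each semitone is a factor of 2**(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def hz_to_step_interval_us(freq):
    """Microseconds between step pulses so the motor 'sings' at freq."""
    return 1_000_000 / freq

print(midi_note_to_hz(69))   # 440.0 (A4)
print(midi_note_to_hz(60))   # ~261.6 (middle C)
```

Step the motor once every `hz_to_step_interval_us(freq)` microseconds and the mechanical resonance does the rest.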
If you don’t have an HP ScanJet lying around, but you are up to your ears in surplus Commodore 64s, we’ve got another build you should check out.
Filed under: Arduino Hacks, digital audio hacks, musical hacks
A SCARA (Selective Compliance Assembly Robot Arm) is a type of articulated robot arm first developed in the early ’80s for use in industrial assembly and production applications. All robotics designs have their strengths and their weaknesses, and the SCARA layout was designed to be rigid in the Z axis, while allowing for flexibility in the X and Y axes. This design lends itself well to tasks where quick and flexible horizontal movement is needed, but vertical strength and rigidity is also necessary.
This is in contrast to other designs, such as fully articulated arms (which need to rotate to reach into tight spots) and cartesian overhead-gantry types (like in a CNC mill), which require a lot of rigidity in every axis. SCARA robots are particularly useful for pick-and-place tasks, as well as a wide range of fabrication jobs that aren’t subjected to the stress of side-loading, like plasma cutting or welding. Unfortunately, industrial-quality SCARA arms aren’t exactly cheap or readily available to the hobbyist; but, that might just be changing soon with the Creo Arm.
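The geometry behind the SCARA layout is just two revolute joints in the horizontal plane plus an independent linear Z axis. A hedged sketch of the forward kinematics (generic two-link math with made-up link lengths, not any particular arm’s firmware):

```python
import math

def scara_forward(theta1, theta2, z, l1=0.3, l2=0.3):
    """Tool position for a SCARA arm: two rotary joints set X/Y, the Z axis
    moves independently. Link lengths l1/l2 are example values in meters."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y, z

# Arm straight out along X:
print(scara_forward(0.0, 0.0, 0.1))            # (0.6, 0.0, 0.1)
# Elbow folded forward 90 degrees:
print(scara_forward(0.0, math.pi / 2, 0.1))    # (~0.3, 0.3, 0.1)
```

Because Z never appears in the X/Y equations, vertical rigidity and horizontal flexibility really are decoupled in this design.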
Slated to go on the market soon as a fully assembled unit, the Creo Arm will also be released as an open source design that you can build yourself. The creator of this SCARA robot, [anfroholic], says the fully assembled production version should retail for less than $4,000. And that’s for an arm with a claimed positional repeatability of 0.03 mm at a distance of 600 mm. Current documentation is fairly sparse, but [anfroholic] has a number of videos showcasing the different functions he has developed, which serve as a nice proof of concept.
Filed under: robots hacks
A few years ago, thermal imaging sensors – cameras that could see heat – became very cheap. FLIR was going all-in with their Lepton module, and there were a number of clip-on cellphone accessories that gave the computer in your pocket the ability to see infrared.
Fast forward a few years, and you can still buy a thermal imaging sensor for your cellphone, and it still costs about the same as it did in 2013. For his Hackaday Prize project, [Josh] is building a more modern, lower-cost thermal imaging camera. It won’t have the resolution of the fancy $1000 FLIR unit, but it will be very inexpensive, with a BOM cost of about $50.
[Josh] is building his low-cost thermal camera around Panasonic’s Grid-EYE module. This thermopile array contains 64 individual infrared sensors, giving this ‘camera’ a resolution of 8×8 pixels. That’s nothing compared to the thousands of pixels found in devices using the FLIR Lepton, but the Grid-EYE is very cheap.
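Each Grid-EYE pixel comes back as two bytes holding a 12-bit two’s-complement value scaled at 0.25 °C per count (that scaling is per our reading of Panasonic’s datasheet; the decode below is our illustration, not [Josh]’s code). Turning raw bytes into temperatures looks something like:

```python
def grideye_pixel_to_celsius(lo, hi):
    """Combine the two pixel bytes (low byte first) into a signed 12-bit
    value and scale by 0.25 degC per count."""
    raw = ((hi << 8) | lo) & 0x0FFF
    if raw & 0x0800:          # sign bit of the 12-bit value
        raw -= 0x1000
    return raw * 0.25

print(grideye_pixel_to_celsius(0x64, 0x00))  # 100 counts -> 25.0 degC
print(grideye_pixel_to_celsius(0xFF, 0x0F))  # -1 count  -> -0.25 degC
```

Run that over all 64 pixel register pairs and you have your 8×8 thermal frame, ready for upscaling onto the LCD.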
Right now, [Josh]’s build is using an ARM Cortex M0+ and a cheap touch screen LCD he picked up from AliExpress. There’s an optional component to this build in the form of a visible light camera, giving [Josh] the ability to overlay thermal sensor data over a visible light image, just like the fancier, more expensive units.
With a total BOM cost of $44.50, [Josh]’s build is easy on the pocketbook, but still good enough to get some useful information. It’s a great build, and a great entry for The Hackaday Prize.
Filed under: The Hackaday Prize
[EssentialCraftsman] is relatively new to YouTube, but he’s already put out some impressive videos. We really enjoyed an episode dedicated to a fixture in his shop, his large custom blacksmith’s forge.
The forge is a custom cast vault of refractory that sits on a platter of fire bricks suspended on a heavy-duty rotating frame. Two forced-air natural gas burners provide the heat. The frame is plasma-CNC-cut steel welded together.
A lot of technical challenges had to be solved. How does one hold a couple hundred pound piece of refractory in such a way that it can be lifted, especially when any steel parts exposed to the heat of the forge would become plastic and fail? When the forge turns off, how do you keep the hot air in the forge from rising into the blowers and melting them? There were many more.
We were really impressed by the polished final appearance of the forge, and the cleverness of its design. Everything is well thought out, and you can even increase the height of the forge by propping it up on more fire bricks. We hope [EssentialCraftsman] will continue to produce such high quality videos. We also enjoyed his episode on anvils, as well as a weirdly informative tirade on which shape of stake (round or square) to use when laying out concrete jobs. Videos after the break.
The more you know…
Filed under: classic hacks
We were doing our daily harvest of YouTube for fresh hacks when we stumbled on a video that eventually led us to this conversion of a 1980s Armatron robot to steam power.
The video in question was of [The 8-bit Guy] doing a small restoration of a 1984 Radio Shack Armatron toy. Expecting a mess of wiring, we were absolutely surprised to discover that the internals of the arm were all mechanical, with only a single electric motor. Perhaps the motors were more expensive back then?

The resemblance is uncanny.
The arm is driven by a Sarlacc pit of planetary gears. These in turn are driven by a clever synchronized transmission. It’s very, very cool. We, admittedly, fell down the Google rabbit hole. There are some great pictures of the internals here. Whoever designed this was very clever.
The robot arm can do full 360° rotations at every joint that supports rotation, without slip rings. The copper shafts were also interesting; it’s a sort of history lesson on the prices of metal and components at the time.
Regardless, the single motor drive was what attracted [crabfu], ten entire years ago, to attach a steam engine to the device. A quick cut through the side of the case, a tiny chain drive, and a Jensen steam engine was all it took to get the toy converted over. Potato quality video after the break.
Filed under: classic hacks, robots hacks
We all remember the video games of our youth fondly, and many of us want to relive those memories and play those games again. When we get this urge, we usually turn first to emulators and ROMs. But, old console and computer games relied heavily on the system’s hardware to control the actual gameplay. Most retro consoles, like the SNES for example, rely on the hardware clock speed to control gameplay speed. This is why you’ll often experience games played on emulators as if someone is holding down the fast forward button.
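That speed coupling is also why accurate emulators have to throttle themselves back to the original clock rate. A rough sketch of the pacing logic (a generic illustration, not taken from any particular emulator; the clock constant is just an example value):

```python
import time

CPU_HZ = 1_789_773  # example clock rate; substitute your target console's

def throttle(cycles_executed, start_time):
    """Sleep until wall-clock time catches up with emulated time, so the
    game runs at original speed instead of permanent fast-forward."""
    emulated_seconds = cycles_executed / CPU_HZ
    elapsed = time.monotonic() - start_time
    if emulated_seconds > elapsed:
        time.sleep(emulated_seconds - elapsed)
```

Skip the `sleep` (or get the constant wrong) and you get exactly the chipmunk-speed gameplay described above.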
The solution, of course, is to play the games on their original systems when you want a 100% accurate experience. This is what led [Chris Osborn] back to gameplay on an Apple II. However, he quickly discovered that approach had challenges of its own – specifically when it came to the joystick.
The Apple II joystick used a somewhat odd analog potentiometer design, the idea being that when you pushed the joystick far enough, it’d register as a move (probably with an eye toward smooth, position-sensitive gameplay in the future). This joystick was tricky: the potentiometers needed to be adjusted, and sometimes your gameplay would be ruined when you randomly turned and ran into a pit in Lode Runner.
The solution [Chris] came up with was to connect a modern USB gamepad to a Raspberry Pi, and then set it to output the necessary signals to the Apple II. This allowed him to tune the output until the Apple II was responding to gameplay inputs consistently. With the erratic nature of the original joystick eliminated, he could play games all day without risk of sudden unrequested jumps into pits.
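The tuning boils down to mapping the gamepad’s centered axis onto the paddle range the Apple II expects, with a dead zone so jitter around center never registers as a move. Something like this (illustrative values and names, not [Chris]’s actual code):

```python
def axis_to_paddle(axis, deadzone=0.15):
    """Map a USB gamepad axis (-1.0 .. 1.0) to an 8-bit paddle value
    (0 .. 255), snapping small wobbles to dead center."""
    if abs(axis) < deadzone:
        return 128                      # rock-solid center: no phantom moves
    paddle = round((axis + 1.0) / 2.0 * 255)
    return max(0, min(255, paddle))

print(axis_to_paddle(0.05))   # 128 (inside the dead zone)
print(axis_to_paddle(-1.0))   # 0   (hard left)
print(axis_to_paddle(1.0))    # 255 (hard right)
```

The dead zone is the part that kills the random Lode Runner deaths: no amount of potentiometer drift can nudge a centered stick into a move.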
The Apple II joystick is a weird beast, unlike anything else of the era. This means there’s no Apple II equivalent of plugging a Sega controller into an Atari, or vice versa. If you want to play games on an Apple II the right way, you either need to find an (expensive) original Apple joystick, or build your own from scratch. [Chris] is still working on finalizing his design, but you can follow his git repository for the most recent version.
Filed under: classic hacks, Raspberry Pi
If you have an interest in audio there are plenty of opportunities for home construction of hi-fi equipment. You can make yourself an amplifier which will be as good as any available commercially, and plenty of the sources you might plug into it can also come into being on your bench.
There will always be some pieces of hi-fi equipment which, while not impossible to make, will be very difficult for you to replicate yourself. Either their complexity will render construction too difficult, as might be the case with, for example, a CD player, or, as with a moving-coil loudspeaker, the quality you could reasonably achieve would struggle to match that of the commercial equivalent. It never ceases to astound us what our community of hackers and makers can achieve, but the resources, economies of scale, and engineering expertise available to a large hi-fi manufacturer load the dice in their favour in those cases.
The subject of this article is a piece of extreme high-end esoteric hi-fi that you can replicate yourself, indeed you start on a level playing field with the manufacturers because the engineering challenges involved are the same for them as they are for you. Electrostatic loudspeakers work by the attraction and repulsion of a thin conductive film in an electric field rather than the magnetic attraction and repulsion you’ll find in a moving-coil loudspeaker, and the resulting very low mass driver should be free of undesirable resonances and capable of a significantly lower distortion and flatter frequency response than its magnetic sibling.
If you’ve ever felt your hair lift as you take off a synthetic-fibre jumper, you will be familiar with the phenomenon of electrostatic force. This is Coulomb’s Law in action: the force resulting from a significant difference in accumulated electrostatic charge between garment and wearer. Harnessing this force in a speaker requires a charge that varies at audio frequency, acting over a surface area large enough to create sound waves, in proximity to a conductive electrode maintained at a constant opposite charge.
A practical electrostatic loudspeaker. By Akilaa (Own work) [Public domain], via Wikimedia Commons.

A practical electrostatic speaker uses a flexible film that is as thin and light as possible as its moving component. A partially conductive coating is applied to it — just conductive enough to allow it to accumulate charge, but not enough to allow that charge to flow away freely. Two acoustically transparent conductive electrodes are suspended next to the film, one on either side. These electrodes can be wire mesh, an array of parallel wires, or a perforated metal sheet.
The film is held at a very high constant static charge by the application of a multi-kilovolt DC supply, while the electrodes on either side of it are supplied with very high voltage audio in an antiphase, push-pull configuration. This push-pull feed on either side of the film ensures that the forces on it are the same whether it moves to either side of its center position. If there were a single electrode on one side of the film, it would introduce distortion, because the half of the cycle when the film was furthest from the electrode would receive less force than the half when it was closest.
In the majority of electrostatic speaker designs, the high voltage audio is created using a step-up transformer from a conventional audio amplifier designed for moving-coil speakers. It is possible to create an audio amplifier with an output in the multiple kilovolts, but the designers of such amplifiers face challenges from the high voltage itself and from the availability of suitable devices capable of delivering low-distortion audio at those voltages.

The Drawbacks
A Martin Logan electrostatic speaker on the left, next to a moving-coil speaker. By Bownose on Flickr [CC BY 2.0], via Wikimedia Commons.

You might think that with low distortion and amazingly flat frequency response there would be no stopping the electrostatic speaker. But as with so many things, there are weaknesses to these devices. They work in both directions, for a start. Half the sound they produce comes from the front, while the other half goes off somewhere behind the speaker where you can’t benefit from it. You can’t box them in with an infinite baffle or a ported enclosure as you can with a moving-coil speaker, and if you try to reflect too much of the backward-facing sound in your direction you’ll soon find it is out of phase with the sound coming from the front.
Once you’ve reconciled yourself with your speaker’s propensity for sending sound backwards, you then have another problem to contend with. Electrostatic speakers are very directional, so while you can set them up for a perfect stereo image in one spot, a relatively small movement can take you out of it. Various attempts have been made to broaden their spread, with differing degrees of success: some manufacturers produce curved panels, while others supply the grid of electrodes from a series of delay lines to achieve a similar effect.
Finally, in the litany of electrostatic speaker woes, these speakers are not so good when it comes to bass reproduction. Electrostatic speakers will therefore often come with a moving-coil bass unit as a companion to the electrostatic panels, and will usually incorporate a separate bass amplifier and active crossover. Because the radiation patterns of cone speakers and electrostatics are different, the bass-to-treble balance will vary as you move around the room, further accentuating the sweet-spot issues.

Build Them
Of course, general information about electrostatic speakers is all very well, but this is a site for hardware hackers. You want to see real examples, and maybe have a go yourself. In which case we won’t disappoint you, with electrostatic speaker builds from [Mark Rehorst], [Jazzman], and [Ken Siebert].
The materials required for home electrostatic speaker construction are fairly straightforward, with maybe one exception: the plastic film that forms the moving part. [Mark Rehorst] has an exhaustive list, and makes this important point: “Buying stuff specifically for ESLs is like buying parts for Ferraris. The seller knows you expect to be robbed so they try not to disappoint you.” The world of high-end hi-fi works to different economic rules, it seems, so sourcing the same components from suppliers in other industries is the way to go.

Ken Siebert’s high voltage bias supply.
The transformer is an example with plenty of scope for stretching your budget. Audio transformers are expensive at the best of times. Happily there is a cheap alternative: most home builders use standard mains transformers connected in reverse. It has to be said, though, that as soon as you put a transformer in an audio circuit, its performance is only as good as that of the transformer, so there may be some compromises in this component.
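The numbers work out nicely: a reversed mains transformer’s step-up is just its primary-to-secondary turns ratio. A quick sanity check on what reaches the panel (example figures of our choosing, not from any specific build):

```python
def stepped_up_peak(amp_vrms, mains_v=230.0, secondary_v=6.0):
    """Drive the low-voltage (former secondary) winding from an audio amp;
    the former primary then delivers amp voltage times the turns ratio."""
    ratio = mains_v / secondary_v          # e.g. 230:6 -> ~38:1
    vrms_out = amp_vrms * ratio
    return vrms_out * 2 ** 0.5             # peak voltage on the panel

print(stepped_up_peak(40.0))   # a 40 V RMS amp swing -> ~2170 V peak
```

A modest amplifier and a cheap toroid get you comfortably into the kilovolt territory the panels need, which is why this trick is so popular among home builders.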
Any thin, flexible plastic film can make a noise in an electrostatic speaker, but the thinner your film, the better the performance. Mylar 5 microns thick seems to be the preferred choice.
Once you have your film, it needs a slightly conductive coating. It mustn’t be too conductive: charge must accumulate but not flow away too quickly. Some electrostatic speakers will take a few minutes to reach their maximum volume for this reason, as the charge moves very slowly to fill the panel. Older designs used graphite powder rubbed onto the surface of the film, while more recent ones use spray-on coatings intended for static protection in the electronics business.
The fixed electrodes can be made from a variety of materials. [Mark] uses perforated aluminium, while [Jazzman] and [Ken] use a grid of wires, and in one case welding rods. A well known commercial design uses a very large perforated printed circuit board with the electrodes etched in a carefully designed pattern to try to broaden the angle of the finished speaker.
The high voltage bias supply is usually generated with a voltage multiplier chain from an AC transformer, though more recent designs may use solid state inverters. Normally about 4 to 6 kV DC is required for this task.
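For the multiplier-chain approach, each stage of an ideal Cockcroft-Walton ladder adds roughly twice the peak input voltage (ignoring losses and droop under load, which are significant in practice). A back-of-envelope stage count under that simplification:

```python
import math

def cw_stages_needed(v_target, v_in_rms):
    """Ideal Cockcroft-Walton multiplier: output ~= 2 * stages * Vpeak."""
    v_peak = v_in_rms * math.sqrt(2)
    return math.ceil(v_target / (2 * v_peak))

print(cw_stages_needed(5000, 230))   # 8 stages from 230 V mains
print(cw_stages_needed(5000, 12))    # 148 stages from a 12 V winding
```

The bias supply delivers essentially no current, which is exactly the regime where a simple multiplier chain shines.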
[Jazzman]’s film stretching table.

Once these components have been sourced, all that remains is any woodwork necessary to make the frames that hold it all together, plus copper tape and wiring to ensure all contacts are made. Building the speaker is then a fairly straightforward workshop task, with one tricky moment: stretching the film over its wooden frame. It needs to be under slight tension, as any slackness or wrinkles will cause distortion. The technique used by most home builders is to stretch it over a frame with a bicycle inner tube around its edge, and then to inflate the tube, which draws the film taut.
If you have got this far in an electrostatic speaker build, you should have something fairly special: there is very little engineering that the commercial manufacturers can do to make it much better. Listening to an electrostatic speaker can be something of a startling experience, but beware of superlatives. The hi-fi industry has a special kind of mumbo-jumbo with a whole vocabulary of pseudoscience to make its adherents feel good about eye-watering price tags, and electrostatic speakers are something they have placed on a special pedestal all of their own. You probably won’t be disappointed with your speaker build, but you’ll know it has the flaws detailed above and you’ll put up with them for the bragging rights.
Filed under: Curated, Featured, musical hacks, Original Art, Skills
[Folkert van Heusden] sent us in his diabolical MIDI device. Ardio is a MIDI synthesizer of sorts, playing up to sixteen channels of square waves, each on its separate Arduino output pin, and mixed down to stereo with a bunch of resistors. It only plays square waves, and they don’t seem to be entirely in tune, but it makes a heck of a racket and makes use of an interesting architecture.
Ardio is made up of three separate el cheapo Arduino Minis, because…why not?! One Arduino handles the incoming MIDI data and sends note requests out to the other modules over I2C. The voice modules receive commands — play this frequency on that pin — and take care of the sound generation.
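Per voice, the synthesis is about as simple as it gets: toggle a pin every half-period of the note, and let the resistor network sum the outputs. A software model of that mix (illustrative only, not [Folkert]’s firmware):

```python
def square(freq, t):
    """Square wave at frequency freq, sampled at time t: +1 or -1."""
    return 1.0 if (t * freq) % 1.0 < 0.5 else -1.0

def resistor_mix(freqs, t):
    """Equal resistors feeding one node behave like a simple average."""
    return sum(square(f, t) for f in freqs) / len(freqs)

voices = [261.63, 329.63, 392.00]          # a C major chord
print(resistor_mix(voices, 0.0))           # 1.0: all voices start high
```

With sixteen such voices summed, the waveform stays bounded while each channel stays dirt cheap, which is the whole charm of the design.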
None of the chips are heavily loaded, and everything seems to run smoothly, despite the amount of data that’s coming in. As evidence, go download [Folkert]’s rendition of Abba’s classic “Chiquitita” in delicious sixteen-voice “harmony”. It’s a fun exercise in using what’s cheap and easy to get something done.
Filed under: musical hacks
An SD card is surely not an enterprise-grade storage solution, but single board computers also aren’t just toys anymore. You find them in applications far beyond the educational purpose they emerged from, and the line between non-critical and critical applications keeps getting blurred.
Laundry notification hacks and arcade machines fail without causing harm. But how about electronic access control, or an automatic pet feeder? Would you rely on the data integrity of a plain micro SD card stuffed into a single board computer to keep your pet fed while you’re on vacation, and to let you back in afterward? After all, SD card corruption is a well-discussed topic in the Raspberry Pi community. What can we do to keep our favorite single board computers from failing at random, and is there a better solution to the problem of storage than a stack of SD cards?

Understanding Flash Memory
The Flash Translation Layer
The special properties of Flash memory reach deep down to the silicon, where individual memory cells (floating gates) are grouped in pages (areas that are programmed simultaneously), and pages are grouped in blocks (areas that are erased simultaneously). Because entire blocks have to be erased before new data can be written to them, adding data to an existing block is a complex task: at a given block size (e.g. 16 kB), storing a smaller amount of data (e.g. 1 kB) requires reading the existing block, modifying it in cache, erasing the physical block, and writing back the cached version.
This behavior makes Flash memory (including SSDs, SD-cards, eMMCs, and USB thumb drives) slightly more susceptible to data corruption than other read-write media: There is always a short moment of free fall between erasing a block and restoring its content.
The Flash Translation Layer (FTL) is a legacy interface that maps physical memory blocks to a logical block address space for the file system. Both SSDs and removable Flash media typically contain a dedicated Flash memory controller to take care of this task. Because individual Flash memory blocks wear out with every write cycle, this mapping usually happens dynamically. Referred to as wear-leveling, this technique causes physical memory blocks to wander around in the logical address space over time, thus spreading the wear across all available physical blocks. The current map of logical block addresses (LBAs) is stored in a protected region of the Flash memory and updated as necessary. Flash memory controllers in SSDs typically use more effective wear-leveling strategies than SD cards and therefore live significantly longer. During their regular lifetime, however, both may perform just as reliably.

Retroactive Data Corruption

Blocks on fire (CC-BY-SA 3.0 by Minecraft Wiki)
A write operation on Flash typically includes caching, erasing and reprogramming previously written data. Therefore, in the case of a write abort, data corruption on Flash memory can retroactively corrupt existing data entirely unrelated to the data being written.
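That window of vulnerability is easy to model: a toy flash block that must be erased before reprogramming, with a power cut landing between the two steps. A deliberately simplified sketch (not how any real FTL is implemented):

```python
class ToyFlashBlock:
    """A flash block that must be erased (all 0xFF) before reprogramming."""
    def __init__(self, size=16):
        self.data = [0xFF] * size

    def rewrite(self, offset, new_bytes, power_cut=False):
        cache = self.data[:]                       # read-modify in cache
        cache[offset:offset + len(new_bytes)] = new_bytes
        self.data = [0xFF] * len(self.data)        # erase the whole block
        if power_cut:
            return                                 # lights out mid-cycle
        self.data = cache                          # program cached version

block = ToyFlashBlock()
block.rewrite(0, [1, 2, 3, 4])                     # old, unrelated data
block.rewrite(8, [9, 9], power_cut=True)           # power fails mid-write
print(block.data[:4])   # [255, 255, 255, 255]: the old data is gone too
```

Note that the bytes lost at offset 0 had nothing to do with the write in flight, which is exactly what makes this failure mode "retroactive".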
The amount of corrupted data depends on the device-dependent block size, which can vary from 16 kB up to 3 MB. This is bad, but the risk of encountering retroactive data corruption is also relatively low. After all, it requires a highly unusual event to slice right in between the erase and reprogramming cycle of a block. It is mostly ignored outside of data centers, but it is certainly a threat to critical applications that rely on data integrity.

Unexpected Power Loss

One of these cables is vital. (CC-BY 2.0 by Sparkfun)
The most likely cause of write-abort-related data corruption is an unexpected power loss, and Flash memory in particular does not take them very well. Neither consumer grade SSDs nor SD cards are built to maintain data integrity in an environment plagued with an unsteady power supply. The more often power losses occur, the higher the chance of data corruption. Industrial SSDs, preferably found in UPS-powered server racks, additionally contain military-grade fairy dust (read: impressive banks of tantalum capacitors, or even batteries), which buys them enough time to flush their large caches to physical memory in case of a power loss.
While laptops, tablets, and smartphones don’t particularly have to fear running out of juice before they can safely shut down, SBCs are often left quite vulnerable to power losses. Looking at the wiggly micro USB jack and the absence of a shutdown button on my Pi, power loss is effectively built in. In conjunction with Flash memory, this is indeed an obstacle to achieving data integrity.

The Role Of The File System
File systems provide a file-based structure on top of the logical block address space and also implement mechanisms to detect and repair corruptions of their own. If something goes wrong, a repair program will scan the entire file system and do its best to restore its integrity. Additionally, most modern file systems offer journaling, a technique where write operations are logged before they are executed. In the case of a write abort, the journal can be used either to restore the previous state or to complete the write operation. This speeds up filesystem repairs and increases the chance that an error can actually be fixed.
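The journaling idea in miniature: log the intended write first, apply it second, and replay anything left in the log after a crash. A toy sketch of the concept (conceptual only; real journals are far more involved):

```python
class JournaledStore:
    """Write-ahead logging: every write is recorded before it is applied,
    so an interrupted write can be completed on the next mount."""
    def __init__(self):
        self.data = {}
        self.journal = []                   # pending (key, value) entries

    def write(self, key, value, crash_before_apply=False):
        self.journal.append((key, value))   # 1. log the intent
        if crash_before_apply:
            return                          # power fails here
        self.data[key] = value              # 2. apply the write
        self.journal.pop()                  # 3. retire the journal entry

    def recover(self):
        """On remount, replay anything still sitting in the journal."""
        while self.journal:
            key, value = self.journal.pop(0)
            self.data[key] = value

store = JournaledStore()
store.write("config", "v1")
store.write("config", "v2", crash_before_apply=True)
store.recover()
print(store.data["config"])   # 'v2': the logged write was completed
```

The cost is visible even in the toy: every write touches the journal as well as the data, which is exactly the write-amplification tradeoff discussed next.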
Unfortunately, journaling is not free. If every write operation were written to the journal first, the effective write speed would be cut in half while the Flash memory wear would be doubled. Therefore, commonly used file systems like HFS+ and ext4 only keep stripped-down journals, mostly covering metadata. It is this practical tradeoff that makes file systems a particularly bad candidate to stand in for data integrity after a failure in the underlying storage medium. They can restore integrity, but they can also fail. And they can’t restore lost data.
In the age of Flash memory, the role of the file system is changing, and it’s about to absorb the functions of the FTL. The file system JFFS2 is prepared to directly manage raw NAND Flash memory, resulting in more effective wear-leveling techniques and the avoidance of unnecessary write cycles. JFFS2 is commonly used on many OpenWRT devices, but despite its advantages, SBCs that run on Flash media with an FTL (SD cards, USB thumb drives, eMMCs) will not benefit from such a file system. It is worth mentioning that the BeagleBone Black actually features a 512 MB portion of raw, directly accessible NAND flash, which invites experiments with JFFS2.

The Pi Of Steel
To answer the initial question of how to effectively prevent data corruption on single board computers: the physical layer matters. High-quality SD cards perform better and live longer, especially in single board computers. Employing a larger SD card than the absolute minimum adds an additional margin to make up for suboptimal wear-leveling.

The LiFePo4wered/Pi adds a power button and UPS to the Raspberry Pi.
The next step on the way to the Pi Of Steel should deal with unexpected power losses. Adopting a battery-based UPS will reduce them to homeopathic doses, and over at hackaday.io, Patrick Van Oosterwijck has worked out a great UPS solution to keep a Raspberry Pi alive at all times.
For some applications, this may still not be enough and for others, the added cost and weight of a battery pack may not be practical. In those cases, there is actually only one thing you can do: Set the Pi’s root partition to read-only. This practically gives you the power of an SBC and the reliability and longevity of a microcontroller.
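The mechanics of a read-only root are worth a quick sketch. The exact steps vary by distribution, and the device name and sizes below are typical examples rather than anything Pi-specific:

```shell
# Remount the root filesystem read-only on a running system
# (reverts on reboot unless /etc/fstab is changed):
sudo mount -o remount,ro /

# A typical /etc/fstab entry to make it permanent -- note the "ro" option.
# Device name and filesystem type are examples; check your own system:
#   /dev/mmcblk0p2  /         ext4   defaults,ro,noatime  0  1

# Anything that must stay writable (logs, temp files) can live in RAM
# so the SD card is never written to:
#   tmpfs           /var/log  tmpfs  defaults,size=16m    0  0
#   tmpfs           /tmp      tmpfs  defaults,size=32m    0  0
```

With no writes ever hitting the card, neither a yanked cable nor a worn-out Flash cell can corrupt the root filesystem, which is where the microcontroller-like reliability comes from.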
Ultimately, a Flash cell in read-write mode can only be so reliable, and just by looking at the facts, I would think twice before employing SD-card based single board computers in certain applications. But what do you, our readers, think? What’s your strategy to keep your SD cards sane? Let us know in the comments!
Filed under: Hackaday Columns, Raspberry Pi
Brothers [Armand] and [Victor] took their acoustic guitar to the next level, making their own pickups to turn it into an electric guitar. The result is that awesome electric guitar sound.
The pickups are homemade magnetic pickups. Each string has a steel bolt behind it with three ceramic magnets on each bolt, and a single coil is wrapped around all the pickups. That coil is what’s connected to the wires going to the amplifier. When a string vibrates, it changes the magnetic field in the pickup, which induces a current in the coil; that current is then sent on to the amplifier to be altered as desired and turned back into sound. Of course that meant the guys had to swap their nylon strings for steel ones.
With just the volume amplified, the sound isn’t very different, but when the amplifier’s gain is turned up and the volume turned down, the sound is undoubtedly electric. As you can hear in the video below, Johnny B. Goode, Paint It Black and Satisfaction take the acoustic guitar’s sound to a whole new level.
Filed under: musical hacks
Playing around with FPGAs used to be a daunting prospect. You had to fork out a hundred bucks or so for a development kit, sign the Devil’s bargain to get your hands on a toolchain, and only then could you start learning. In the last few years, a number of forces have converged to bring the FPGA experience within the reach of even the cheapest and most principled open-source hacker.
[Ken Boak] and [Alan Wood] put together a no-nonsense FPGA board with the goal of getting the price under $30. They basically took a Lattice iCE40HX4K, an STM32F103 ARM Cortex-M3 microcontroller, some SRAM, and put it all together on a single board.
The Lattice part is a natural choice because the IceStorm project created a full open-source toolchain for it. (Watch [Clifford Wolf]’s presentation). The ARM chip is there to load the bitstream into the FPGA on boot up, and also brings USB connectivity, ADC pins, and other peripherals into the mix. There’s enough RAM on board to get a lot done, and between the ARM and FPGA, there’s more GPIO pins than we can count.
Modeling an open processor core? Sure. High-speed digital signal capture? Why not. It even connects to a Raspberry Pi, so you could use the whole affair as a high-speed peripheral. With so much flexibility, there’s very little that you couldn’t do with this thing. The trick is going to be taming the beast. And that’s where you come in.
Filed under: ARM, FPGA
[Aleksejs Mirnijs] needed a tool to accurately measure the power consumption of his Raspberry Pi and Arduino projects, which is an important parameter for dimensioning adequate power supplies and battery packs. Since most SBC projects require a USB hub anyway, he designed a smart, WiFi-enabled 4-port USB hub that is also a power meter – his entry for this year’s Hackaday Prize.
[Aleksejs’s] design is based on the FE1.1s 4-port USB 2.0 hub controller, with two additional ports for charging. Each port features an LT6106 current sensor and a power MOSFET to individually switch devices on and off as required. An ATmega32L monitors the bus voltage and current draw, switches the ports and talks to an ESP8266 module for WiFi connectivity. The supercharged hub also features a display, which lets you read the measured current and power consumption at a glance.
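The arithmetic behind a high-side current-sense amplifier like the LT6106 is simple Ohm’s law. The sketch below shows the general conversion from an ADC reading to current and power; all component values are made-up examples, not taken from [Aleksejs’s] design:

```python
# Convert an ADC reading from a high-side current-sense amplifier into
# load current and power. All component values are illustrative only.

V_REF = 3.3        # ADC reference voltage (V)
ADC_MAX = 1023     # 10-bit ADC full scale
R_SENSE = 0.05     # high-side sense resistor (ohms)
GAIN = 20.0        # amplifier gain, set by its external resistors

def load_current(adc_counts):
    """Amps through the sense resistor for a given ADC reading."""
    v_out = adc_counts / ADC_MAX * V_REF   # amplifier output seen by the ADC
    v_sense = v_out / GAIN                 # voltage drop across R_SENSE
    return v_sense / R_SENSE               # I = V / R

def load_power(adc_counts, bus_voltage=5.0):
    """Watts drawn by the load at the given bus voltage."""
    return load_current(adc_counts) * bus_voltage
```

With these example values, a full-scale reading corresponds to 3.3 A, so the resistor and gain together set the measurable range; picking R_SENSE is always a tradeoff between resolution and the voltage (and power) burned in the sense resistor itself.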
Unlike most cheap hubs out there, [Aleksejs’s] hub has a properly designed power path. If an external power supply is present, an onboard buck converter actively regulates the bus voltage while a power path controller safely disconnects the host’s power line. Although the first prototype is already up and running, this project is still under heavy development. We’re curious to see the announced updates, which include a 2.2″ touchscreen and a 3D-printable enclosure.

The HackadayPrize2016 is Sponsored by:
Filed under: The Hackaday Prize
If you are like us, you’ll read a bit more and smack your forehead. Amazon recently filed a patent. That isn’t really news per se; they file lots of patents, including ones that cover clicking on a button to order something and taking pictures against white backgrounds (in a very specific way). However, this patent is not only a good idea, but one we were surprised didn’t arise out of the hacker community.
There can’t be an invention without a problem, and the problem this one solves is a common one: while wearing noise-cancelling headphones, you can’t hear things that you want to hear (like someone coming up behind you). The Amazon solution? Let the headphones monitor for programmable keywords and turn off noise cancellation in response to those words. We wonder if you could have a more sophisticated digital signal processor look for other cues like a car horn, a siren, or a scream.
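A crude version of that idea can be prototyped with a short-term energy detector: if the ambient microphone suddenly gets much louder than the recent background, pass the outside world through. This toy sketch is far simpler than the keyword spotting the patent describes, and the threshold ratio is an arbitrary assumption:

```python
import math

# Toy acoustic-cue detector: flag frames whose RMS energy jumps well
# above the recent background level (e.g. a horn or siren), as a cue
# to disable noise cancellation. A real implementation would use
# keyword spotting or a trained classifier instead.

def rms(frame):
    """Root-mean-square amplitude of one frame of audio samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def should_disable_cancellation(frames, threshold_ratio=4.0):
    """True if the newest frame is much louder than the earlier ones."""
    background = sum(rms(f) for f in frames[:-1]) / (len(frames) - 1)
    return rms(frames[-1]) > threshold_ratio * max(background, 1e-9)
```

Of course, a bare energy threshold would also trip on a slammed door or your own voice; distinguishing a siren from a shout is exactly where the heavier DSP (or the patent’s keyword detection) would come in.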
Filed under: news
Last year, the Federal Communications Commission proposed a rule governing the certification of RF equipment, specifically wireless routers. This proposed rule required router manufacturers to implement security on the radio module inside these routers. Although the rule is fairly limited in scope (it only covers the 5 GHz U-NII bands and only applies to the radio subsystem of a router), the law of unintended consequences reared its ugly head. The simplest way to lock down a radio module is to lock down the entire router, and this is exactly what a few large router manufacturers did. Under this rule, open source, third-party firmwares such as OpenWRT are impossible.
Now, router manufacturer TP-Link has reached an agreement with the FCC to allow third-party firmware. Under the agreement, TP-Link will pay a $200,000 fine for shipping routers that could be configured to run above the permitted power limits.
This agreement is in stark contrast to TP-Link’s earlier policy of shipping routers with signed, locked firmware, in keeping with the FCC’s rule.
This is a huge success for the entire open source movement. Instead of doing the easy thing – locking down a router’s firmware and sending it out the door – TP-Link has chosen to take a hit to their pocketbook. That’s great news for any of the dozens of projects experimenting with mesh networking, amateur radio, or any other wireless networking protocol, and imparts a massive amount of goodwill onto TP-Link.
Thanks [Maave] for the tip.
Filed under: news, wireless hacks
Home automation is a favorite in sci-fi, from Tony Stark’s Jarvis, to Rosie the robotic maid on the Jetsons, and even the sliding doors pulled by a stagehand on Star Trek. In fact, most people have a favorite technology that should be just about ready to make an appearance in their own home. So where are these things? We asked you a few weeks ago, and the overwhelming answer was that the software just isn’t there yet.
We’re toddling through the smart home years, having been able to buy Internet-connected garage doors and thermostats for some time now. But for the most part all of these systems are islands under one roof. Automation is the topic of the current challenge for the 2016 Hackaday Prize. Developing the glue that can hold all of these pieces together would make a great entry. Why doesn’t that glue yet exist?
I think the problem is really twofold. First, there isn’t a clear way to make many devices work under one piece of software. Second, there really isn’t an obvious example of great user experience when it comes to home automation. Let’s look at why and talk about what will eventually get us there.

Human Interface
Home automation boils down to adding an automated layer between the people in the house and the human interfaces that control the house. This is actually a pretty hard sell. Do your lights need to be automated? Isn’t it just lazy that you can’t get up and press a light switch that works instantly, without need of anything other than reliable mains power?
That’s a tough question. Is your dishwasher a symbol of your laziness? After all, you still need to scrape any leftovers off, load the rack, run the thing, and empty it again. Now that I think of it, automating a dishwasher further would be a great entry too. But my point is that before widespread adoption, a lot of people must have thought that needing an automatic dishwasher was lazy, but now they’re highly desired. For smart homes to become widespread we need to make the benefits much greater than the pain of the transition.

A Tale of Two Switches

WeMo WiFi light switch to the left of two “dumb” switches
I happen to have two smarter-than-average light switches in my house. One is an actual Internet of Things Thing — a WeMo Light Switch — the other is a non-connected switch with some fancy features. The WeMo controls my porch light, which I want to turn on at dusk and turn off at 11pm. For six years I used a switch with a programmable display that was a huge pain to set and reset as the length of days and the time offset changed. It finally died (which a switch should never do) and I bought this one, which has WiFi, but the software is horrible and as much of a pain as the old switch. After the stock setup didn’t work, I was thankfully able to get reliable service by switching to IFTTT to control it, and I haven’t touched it since. After that experience I don’t want to.
Earlier this summer I upgraded to LED recessed lighting in my living room. It’s waaaaay too bright and I needed a dimmer. I knew this was going to be an issue, so I considered opting for a Wink Hub and the recessed lights and switch that go with it. I ended up with non-radio-controlled (normal) lights and a Lutron Maestro dimmer switch. This thing is awesome! You can easily set its minimum and maximum brightness for your lights, but you don’t have to. A double-tap gives you full brightness when you turn it on, but it still remembers your dimmer setting. A long-press to shut off gives you about a minute to get out of the room before the lights go out.

Lutron Maestro dimmer is not dumb but not quite smart
I think the WeMo switch hardware is excellent. But considering the two switches, I love the Lutron and have a bad opinion of the WeMo for no other reason than a bad software setup experience. This is the core of the problem with home automation: the user can’t separate a bad software experience from the hardware, and since they pay good money for the hardware, they are likely to be turned off to any further automation adventures.

Hardware Lock-In
The concept that hardware costs money and software doesn’t is part of the bigger problem. Hardware manufacturers have every incentive to build software that only works with their hardware — they make no money if you use their free app/website/etc. to run another manufacturer’s hardware. This is a pretty tough issue to tackle.
But it does go beyond that. Let’s say a hardware manufacturer were to allow third party hardware to run with their system. If that third party stuff works poorly it may sour the consumer’s opinion of the entire system. Again, this issue doesn’t have a clear solution.
I want to hear what you think about it — is the power of the pocketbook (what technologies we buy and don’t buy) the only leverage we have in this situation? What can we do to encourage manufacturers not to lock down their hardware systems to a proprietary ecosystem?

We’ve Solved This Before
Look to the PC industry. You can run the same program on a Dell, Acer, Asus, or Toshiba laptop. You can even change the operating system you run on those machines, and for that matter, software companies can make their products work on Macs. It’s because there are standards defining what hardware is in these computers so that compilers can be built, and standards for how these computers communicate with the outside world (USB, Ethernet, WiFi, etc.). We need this for non-computer computing devices like lights, light switches, refrigerators, security cameras, doorbells, robot maids, and the like.

We Need a Software Champion
We need to separate hardware from software so the hardware companies can do what they’re best at — build affordable devices that work reliably year in and year out. I don’t think this can happen until a clear software champion (or group of champions) appears. This means an intuitive interface that your average human can understand, configure, and intuitively operate at the same level you can operate a light switch.
This is a really hard problem. How do you think it should be approached? What is the incentive for someone to build these software tools? Get the conversation started below. As with the last installment I’ll pick out some of the most interesting comments and send out Hackaday t-shirts. From the last discussion we sent shirts out to [aleksclark], [DaveW], [Dax], [fountside], [Ian Lee], [j0z0r], [jan], [maxzillian], [Neil Cherry], and [sangwiss].
Are you into DIY home automation? Now’s a great time to show off some of your work. Enter your project now for the Automation challenge of the 2016 Hackaday Prize. Twenty entries will win $1000 each and go on to vie for the grand prize of $150,000.
Filed under: home hacks, The Hackaday Prize
Transistors have come a long way. Like everything else electronic, they’ve become both better and cheaper. According to a recent IEEE article, a transistor cost about $8 in today’s money back in the 1960s. Consider the Regency TR-1, the first transistor radio, from TI and IDEA. In late 1954, the four-transistor device went on sale for $49.95. That doesn’t sound like much until you realize that in 1954 this was equivalent to about $441 (a new car cost about $1,700, and a copy of Life magazine cost 20 cents). Even at that price, they sold about 150,000 radios.
Part of the reason the transistors cost so much was that production costs were high. But another reason is that yields were poor. In some cases, 4 out of 5 of the devices were not usable. The transistors were not that good even when they did work. The first transistors were germanium, which has higher leakage and worse thermal properties than silicon.
Early transistors were subject to damage from soldering, so it was common to use an alligator clip or a specific heat sink clip to prevent heat from reaching the transistor during construction. Some gear even used sockets which also allowed the quick substitution of devices, just like the tubes they replaced.
When the economics of transistors changed, it made a lot of things practical. For example, a common piece of gear used to be a transistor tester, like the Heathkit IT-121 in the video below. If you pulled an $8 part out of a socket, you’d want to test it before you spent more money on a replacement. Of course, if you had a curve tracer, that was even better because you could measure the device parameters which were probably more subject to change than a modern device.
Of course, germanium to silicon is only one improvement made over the years. The FET is a fundamentally different kind of transistor with many desirable properties, and, of course, integrating hundreds or even thousands of transistors on one integrated circuit revolutionized electronics of all types. Transistors got better: parameters became less variable, yields increased, maximum frequencies rose, and power handling capacity grew. Devices just keep getting better. And cheaper.

A Brief History of Transistors
The path from vacuum tube to the Regency TR-1 was a twisted one. Everyone knew the disadvantages of tubes: fragile, power hungry, and physically large, although smaller and lower-power tubes would start to appear towards the end of their reign. In 1925, a physicist patented a FET in Canada but failed to publicize it. Beyond that, mass production of semiconductor material was unknown at the time. A German inventor patented a similar device in 1934 that didn’t take off, either.

Replica of the First Transistor
Bell Labs researchers worked with germanium and actually understood how to make “point contact” transistors and FETs in 1947. However, Bell’s lawyers found the earlier patents and elected to pursue the conventional transistor patent, which would lead to the inventors (John Bardeen, Walter Brattain, and William Shockley) winning the Nobel Prize in 1956.
Two Germans working for a Westinghouse subsidiary in Paris independently developed a point contact transistor in 1948. It would be 1954 before silicon transistors became practical. The MOSFET didn’t appear until 1959.
Of course, even these major milestones are subject to incremental improvements. The V-groove (VMOS) channel for MOSFETs, for example, opened the door for FETs to become true power devices, able to switch the currents required for motors and other high-current loads.

Transistors Today
It is possible to build a lot of things today without ever handling a discrete transistor. However, a modern integrated circuit can contain enormous numbers of devices. A 1959 PDP-1 had about 2,800 discrete transistors in it. Even the 8088, the original IBM PC CPU, had 29,000 devices, over ten times the PDP-1. Some large chips today have billions of devices onboard.
Small transistors that work well are throwaway components today. A quick check of a general-purpose transistor price shows under 20 cents per device, and if you are willing to buy more than one at a time, that price goes down to about 10 cents each, and can drop to 3 cents or less in quantity. For about $80, you can get 2,500 transistors, which ought to be a lifetime supply for most of us. At that point, testers and sockets make less sense.
By the way, we’ve talked about early transistor radios before. Rufus Turner published plans for a crystal radio with a three-transistor audio amplifier back in 1950. Germanium transistors still find use in certain applications that rely on their low forward voltage, and in things like fuzz pedals, where there is a subjective difference in the sound quality they provide.
Regency TR-1 by [Gregory F. Maxwell] GFDL 1.2
Transistor socket by [ArnoldReinhold] CC-BY-SA-3.0
Filed under: Curated, History, Original Art, Retrotechtacular
Disruption is a basic tenet of the Open Hardware movement. How can my innovative use of technology disrupt your dinosaur of an establishment to make something better? Whether it’s an open-source project chipping away at a monopoly or a commercial start-up upsetting an industry with a precarious business model based on past realities, we’ve become used to upstarts taking the limelight.
As an observer it’s interesting to see how the establishment they are challenging reacts to the upstart. Sometimes the fragility of the challenged model is such that they collapse; other times they turn to the courts and go after the competitor or, even worse, the customers, in an effort to stave off the inevitable. Just occasionally, though, they embrace the challengers and try to capture some of what makes them special, and it is one of these cases that is today’s subject.
A famously closed monopoly is the world of academic journals. A long-established industry with a very lucrative business model hatched in the days when its product was exclusively paper-based, this industry has come under some pressure in recent years from the unfettered publishing potential of the Internet, demands for open access to public-funded research, and the increasing influence of the open-source world in science.
Elsevier, one of the larger academic publishers, has responded to this last facet with HardwareX, a publication which describes itself as “an open access journal established to promote free and open source designing, building and customizing of scientific infrastructure“. In short: a lot of hardware built for scientific research is now being created under open-source models, and this is their response.
Some readers might respond to this with suspicion; after all, the open-source world has seen enough attempts by big business to embrace its work and extend it into the proprietary. But the reality is that this is an interesting opportunity for all sides. The open access and the requirement for all submissions to be covered under an open hardware licence mean that it would be impossible for this journal to retreat behind any paywalls. In addition, appearing in a reputable academic journal will bring open-source scientific hardware to a new prominence as it is cited in papers appearing in other journals. Finally, the existence of such a journal will encourage the adoption of open-source hardware in the world of science, as projects are released under open-source licences to fulfill the requirements for submission.
So have the publishing dinosaurs got it right, and is this journal an exciting new opportunity for all concerned? We think it has that potential, and the results won’t be confined to laboratories. Inevitably the world of hackers and makers will benefit from open-source work coming from scientists, and vice versa.
Thanks [Matheus Carvalho] for the tip.
Bookbinding workshop image: By Nasjonalbiblioteket from Norway [No restrictions], via Wikimedia Commons.
Filed under: hardware, news