We are going to great lengths to turn a quick idea into an electronic prototype, be it PCB milling, home etching or manufacturing services that ship PCBs around the world. Unwilling to accept the complications of PCB fabrication, computer science student [Varun Perumal Chadalavada] came up with an express solution for PCB prototyping: Printem – a Polaroid-like film for instant-PCBs.
Printem is a photosensitive multi-layer assembly, similar to presensitized copper clad – but with an instant development feature. It consists of a thin conductive copper foil that is held to a transparent carrier substrate by a photocurable adhesive film. The other side of the copper features a layer of holding adhesive and a peel-off back side.
To turn the Printem film into a PCB, a negative of the copper traces is printed onto the transparent substrate with help of a regular inkjet or laser printer (a). The film is then exposed to UV light (b). Where light shines through the printed mask, the photo-adhesive cures and selectively fuses the copper film to the substrate. After exposure, the back-side with the holding adhesive is peeled off (c), taking the un-fused copper-portion with it.
The copper layer is very thin, about 100 times thinner than on regular PCBs, and breaks clean enough around the contours of the exposed regions to form copper traces. The result is a flexible PCB (d) that, depending on the substrate material (acetate or polyimide), can even be soldered at low temperatures. For those who want to learn more about how Printem works, [Varun] has put together an interesting writeup on Hackaday.io.
Printem is a project at the DGP (Dynamic Graphics Project) lab of the University of Toronto and has been recognized by the university as one of its Inventions of the Year. As co-founder of Printem, Ph.D. student and busy inventor [Varun] now works on the commercialization of his instant PCBs with the support of the university’s accelerator network. Enjoy the explanatory video below and let us know what you think of this in the comments!
Filed under: The Hackaday Prize, tool hacks
The HTC Vive is a virtual reality system designed to work with Steam VR. The system seeks to go beyond just a headset in order to make an entire room a virtual reality environment by using two base stations that track the headset and controller in space. The hardware is very exciting because of the potential to expand gaming and other VR experiences, but it’s already showing significant potential for hackers as well — in this case with robotics location and navigation.
Autonomous robots generally utilize one of two basic approaches for locating themselves: onboard sensors and mapping to see the world around them (like how you’d get your bearings while hiking), or sensors in the room which tell the robot where it is (similar to your GPS telling you where you are in the city). Each method has its strengths and weaknesses, of course. Onboard sensors are traditionally expensive if you need very accurate position data, and GPS location data is far too inaccurate to be of use on a smaller scale than city streets.
[Limor] immediately saw the potential in the HTC Vive to solve this problem, at least for indoor applications. Using the Vive Lighthouse base stations, he’s able to locate the system’s controller in 3D space to within 0.3mm. He’s then able to use this data on a Linux system and integrate it into ROS (Robot Operating System). [Limor] hasn’t yet built a robot to utilize this approach, but the significant cost savings ($800 for a complete Vive, but only the Lighthouses and controller are needed) is sure to make this a desirable option for a lot of robot builders. And, as we’ve seen, integrating the Vive hardware with DIY electronics should be entirely possible.
Filed under: robots hacks, Virtual Reality
Cutting out precise shapes requires a steady hand, a laser cutter, or a CNC mill, right? Nope! All you need is PCB design software and a fabrication facility that’ll do the milling for you. That’s the secret sauce in [bobricius]’s very pleasing seven-segment display design.
His Hackaday.io entry doesn’t have much detail beyond the pictures and the board files, but we’re not sure we need that many either. The lowest board in the three-board stack has Charlieplexed LEDs broken out to six control pins. Next up is a custom-routed spacer board — custom routed by the PCB house, that is. And the top board in the stack is another PCB, this one left clear of copper where the light shines out.
We want to see this thing lit up! We’ve played around with using PCB epoxy material as an LED diffuser before ourselves, and it can look really good. The spacers should help even out the illumination within segments, while preventing bleed across them. Next step? A matrix of WS2812s with custom-routed spacers and diffusers. How awesome would that be?
Filed under: led hacks
The combination of time-lapse photography and slow camera panning can be quite hypnotic – think of those cool sunset to nightfall shots where the camera slowly pans across a cityscape with car lights zooming by. [Frank Howarth] wanted to replicate such shots in his shop, and came up with this orbiting overhead time-lapse rig for his GoPro.
[Frank] clearly cares about the photography in his videos. Everything is well lit, he uses wide-open apertures for shallow depth of field shots, and the editing and post-production effects are top notch. So a good quality build was in order for this rig, which as the video below shows, will be used for overhead shots during long sessions at the lathe and other machines. The gears for this build were designed with [Matthias Wandel]’s gear template app and cut from birch plywood with a CNC router. Two large gears and two small pinions gear down the motor enough for a slow, smooth orbit. The GoPro is mounted on a long boom and pointed in and down; the resulting shots are smooth and professional looking, with the money shot being that last look at [Frank]’s dream shop.
If you haven’t seen [Frank]’s YouTube channel, you might want to check it out. While his material of choice is dead tree carcasses, his approach to projects and the machines and techniques he employs are great stuff. We featured his bamboo Death Star recently, and if you check out his CNC router build, you’ll see [Frank] is far from a one-trick pony.
Filed under: video hacks
DIY medical science is fun stuff. One can ferret out many of the electrical signals that make the body run with surprisingly accessible components and simple builds. While the medical community predictably dwells on the healthcare uses of such information, the hacker is free to do whatever he or she wants.
A good first start is to look at the relatively strong electrical signals coming off of the heart and other muscles. [Bernd Porr] has put together a simple bioamplifier circuit, and his students have made a series of videos explaining its use that’s well worth your time if you are interested in these things.
The electrically inclined among you are likely to want to start with the “from design to measurement” playlist, which details the construction of the amplifier itself. But the real goodies are hidden in the “EEG essentials: how to measure it and its artefacts” list; getting the signals is only the first step — interpreting them is where it gets interesting. For instance, a lot of what is sold as “mind control” devices these days is much more likely to be simply muscle-controlled, and this video shows you why: small signals buried under bigger ones. (Embedded below).
We’re no strangers to the tricks you can play with biosignals. The MobilECG project folks gave a great talk at Hackaday’s Belgrade 2015 conference, and made this sweet ECG business card as a demo. OpenHardwareExG is a more-sophisticated version of the bioamplifier presented here. And straying from the heart, we’ve seen a slew of “mind-controlled” applications.
But the point of the original post here is that making a bioamp need not be bank-breaking or brain-taxing. It’s the kind of thing that you can do simply on a weekend if you’ve already got the parts. What would you control with your body’s own electrical signals?
Thanks [nic] for the great tip!
Filed under: Medical hacks
You would think that there’s nothing to know about RGB LEDs: just buy a (strip of) WS2812s with integrated 24-bit RGB drivers and start shuffling in your data. If you just want to make some shinies, and you don’t care about any sort of accurate color reproduction or consistent brightness, you’re all set.
But if you want to display video, encode data in colors, or just make some pretty art, you might want to think a little bit harder about those RGB values that you’re pushing down the wires. Any LED responds (almost) linearly to pulse-width modulation (PWM), putting out twice as much light when it’s on for twice as long, but the human eye is dramatically nonlinear. You might already know this from the one-LED case, but are you doing it right when you combine red, green, and blue?
It turns out that even getting a color-fade “right” is very tricky. Surprisingly, there’s been new science done on color perception in the last twenty years, even though both eyes and colors have been around approximately forever. In this shorty, I’ll work through just enough to get things 95% right: making yellows, magentas, and cyans about as bright as reds, greens, and blues. In the end, I’ll provide pointers to getting the last 5% right if you really want to geek out. If you’re ready to take your RGB blinkies to the next level, read on!

Gamma
If you’ve ever dimmed a single LED using pulse-width modulation (PWM) before, you have certainly noticed that the response is non-linear. If you ramp up the duty cycle from 0% to 100%, it looks like the LED gets brighter very quickly in the beginning and then somewhere around the 50% mark stops getting brighter at all. On a WS2812, with its eight-bit-per-color resolution, stepping from a red value of 5 to a red value of 10 more than doubles the apparent brightness, while stepping from 250 to 255 can barely be noticed at all.
It’s not the LED or the PWM controlling it that’s to blame, however. It’s your eyes. We perceive brightness using some kind of power law: if B is perceived brightness and L is the luminance — the amount of physical light that’s getting through your irises — the relationship looks roughly something like this:
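In symbols (a reconstruction; the exact constant of proportionality doesn’t matter here):

```latex
B \propto L^{1/\gamma}, \qquad \text{equivalently} \qquad L \propto B^{\gamma}
```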
That exponential relationship, requiring more and more additional light to create a perceptible difference in brightness, is characterized by that Greek exponent: gamma. For your intuition, gamma values from just around 1.5 to around 3 are probably reasonable to consider. Arbitrarily picking gamma to be 2 makes that fractional gamma exponent into a more comfortable square root and usually isn’t too far wrong. 2.2 is a standard value for CRT monitors in the PC world, and 1.8 used to be the standard for Macs.
But if you really care about the way your LEDs look, you’ll want to tweak the gamma to your particular conditions. I like to think of choosing a gamma in terms of black-and-white photography. If we gamma-correct with a value that’s bigger than your eye’s natural gamma an image will look too contrasty — there will be jumps in the brightness where you’d want it to be smooth. If the gamma is set lower than your eye’s gamma, differences will be muted, and it will look muddy. Get it just right, and you get a smooth transition from dark to light across the full range.
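Once you’ve dialed in a gamma, the per-frame math can be precomputed entirely. Here’s a sketch of a table generator along those lines (the gamma value and the C-array output format are illustrative choices, not anyone’s canonical script):

```python
# Generate an 8-bit gamma-correction lookup table: the index is the desired
# perceptual brightness (0-255), the value is the PWM duty cycle to load.
GAMMA = 2.2  # assumed eye gamma; tune to your own eyes and LEDs

gamma_lut = [round(255 * ((i / 255) ** GAMMA)) for i in range(256)]

# Dump it as a C array, ready to paste into microcontroller code.
print("const uint8_t gamma_lut[256] = {")
for row in range(0, 256, 16):
    print("    " + ", ".join(f"{v:3d}" for v in gamma_lut[row:row + 16]) + ",")
print("};")
```

Note how the bottom half of the table gets squashed: with a gamma above one, most of the 8-bit codes end up describing the dim end of the range, which is exactly where your eye wants the resolution.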
Taking the 2.314’th root of a given number is a tall task to ask of a microcontroller, though, and it’s probably overkill. In the end, I usually implement the gamma correction as a lookup table that turns the desired brightness directly into whatever numbers the chip’s PWM routine wants, so there’s no math left to do at all at runtime. Here’s a quick and dirty Python script that will generate the lookup table for you.

Now in Color
Gamma correction can make your single-color LED effects look a lot better. But what happens when you step up from monochrome to RGB color? Imagine that you’ve gone through the whole gamma experiment above with just the red channel of a WS2812 LED. Now you add the green and blue LEDs to the mix. How much brighter does it seem? If you weren’t paying attention above (yawn, math!) you’d say three times brighter. The right answer is the gamma’th root of three.
Strictly speaking, computing brightness depends on the mix of light coming out of all three LEDs. The good news is that you can also figure out the brightness of any arbitrary color combination with gammas. Here’s the formula:
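Written out (reconstructed here to be consistent with the gamma’th-root-of-three result above; R, G, and B are the per-channel perceptual values):

```latex
B_{\text{total}} = \left( R^{\gamma} + G^{\gamma} + B^{\gamma} \right)^{1/\gamma}
```

Setting R = G = B recovers the factor of 3^(1/γ) from the previous paragraph.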
Given any ratio of red to green to blue, you can use this formula to work out the PWM values for each LED that you need to brighten or dim the overall color in equally-sized steps.

Cross-Fading
The other use of the brightness formula above is in fading from one color to another, keeping the perceived brightness constant. For instance, to fade from red to blue naïvely, you might start at (255,0,0) and head over toward (0,0,255) by subtracting some red and adding the same amount of blue. Plugging those values into the brightness formula, the result appears significantly dimmer in the middle: down to about 70% of the brightness of the pure colors. Unfortunately, this is the way that nearly everyone online tells you to do it. That doesn’t make it right. (Or maybe they just don’t care about brightness?)
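The dip is easy to verify numerically. A quick sketch (assuming the channel values are already per-channel gamma-corrected, using the combined-brightness power law discussed here, and taking γ = 2 for the eye):

```python
# Naive red-to-blue fade: subtract red, add blue, stepping through the
# (already gamma-corrected) channel values, then compute the apparent
# brightness of each step with the power-law sum.
GAMMA = 2.0  # assumed eye gamma; pick yours experimentally

def brightness(r, g, b, gamma=GAMMA):
    """Apparent brightness of an RGB triple of perceptual channel values."""
    return (r ** gamma + g ** gamma + b ** gamma) ** (1 / gamma)

full = brightness(255, 0, 0)                  # brightness at the endpoints
fade = [(255 - i, 0, i) for i in range(256)]  # the naive linear fade
dimmest = min(brightness(*rgb) for rgb in fade)

# With gamma = 2 the minimum works out to 1/sqrt(2) of full, about 70.7%
print(f"dimmest point: {dimmest / full:.1%} of endpoint brightness")
```

Holding the brightness constant instead means scaling each step’s channel values up until the power-law sum matches the endpoints.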
A great way to figure out the gamma that you’d like for RGB LEDs is to set up a color fade and adjust the gamma until there is apparently uniform brightness across the strip. In fact, you can do this with just three LEDs. To make the effect most dramatic, it helps to start with medium brightness on either end of the fade: I’ll use (70,0,0) and (0,70,0) for instance. The middle LED should be some kind of yellow with equal parts of red and green. Tweak the amounts of these values until you think that all three LEDs are about the same brightness, and you can solve for your personal gamma.

Color Palettes and Lookup Tables
On a slow microcontroller, or on one that should be doing more important things with its CPU time than computing colors, constantly adjusting color values for brightness is a no-go. In the single-LED case, a lookup table worked well. But in RGB space, a three-dimensional array is needed. For a small number of colors, this can still be workable: five levels of red, blue, and green produces a palette with only 125 (5³) entries. If you’ve got flash memory to spare, you can extend this as far as you’d like.
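As a sketch of the three-dimensional case (the level spacing and gamma here are arbitrary choices for illustration):

```python
# A 5x5x5 color palette: each channel gets five perceptual levels, and
# every entry is pre-gamma-corrected into the PWM triple to ship out.
GAMMA = 2.2                       # assumed eye gamma; tune as described above
LEVELS = [0, 64, 128, 191, 255]   # five perceptual levels per channel

def correct(value, gamma=GAMMA):
    """Perceptual channel value (0-255) -> gamma-corrected PWM value."""
    return round(255 * ((value / 255) ** gamma))

palette = {
    (ri, gi, bi): (correct(r), correct(g), correct(b))
    for ri, r in enumerate(LEVELS)
    for gi, g in enumerate(LEVELS)
    for bi, b in enumerate(LEVELS)
}

print(len(palette))  # 125 entries: small enough to precompute into flash
```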
An alternative workaround is to gamma-adjust the individual channels first. This gets the brightness right, but it also affects the rate at which the hue changes across the cross-fade. You might like this effect or you might not — the best is to experiment. It’s certainly simple.

Color Sensitivity and Other Details
For me, getting control of the brightness of a color LED is about 95% of the battle. The remaining 5% is in getting precise control of the hue. That said, there are two quirks of the human visual system that matter for the hues.
The situation with the cross-fade of colors is actually more complicated than I’ve made it out to be; the eye isn’t uniformly sensitive to each wavelength of light. If you mixed together 10 lumens of red, 10 lumens of green, and 10 lumens of blue, the result would look overwhelmingly blue. The good news is that this effect is so strong that monitor and RGB LED manufacturers pre-weight the amount of light coming out of each LED for you.
So when you assign a value of (10%, 10%, 10%) to an RGB LED, each of the red, green, and blue LEDs is on for 10% of the time, but the green LED is about three times brighter than the red, and ten times brighter than the blue. The LEDs used take care of the (rough) color-balancing for you, so at least that’s one thing that you don’t have to worry about.

Perceptual Uniformity of Hue
If you’re trying to encode numerical values in colors, however, there’s one last quirk of the human perceptual system that you might want to be aware of. We are more sensitive to differences in some colors than in others. In particular, hues around the yellow and cyan regions are really easy for us to distinguish, while different shades of reds and blues are much more difficult. Getting this right is non-trivial, not least because our perception of one color depends on the colors that it’s surrounded by. (Remember the “white and gold” dress?)
Anyway, here’s a library that does pretty darn well at addressing the perceptual uniformity of hue, given that they’re constrained to piecewise linear functions. They sacrifice some degree of uniform brightness to get there, though.
If you just need a few colors along a perceptually uniform color gradient, Color Brewer has your back. Python’s matplotlib is going to change its default color scale to one with significantly increased perceptual uniformity and constant brightness, and this video explaining why and how has a great overview of the subject. It’s not simple, but at least they’re getting it right.
Finally, if you’d really like to dive into color theory, this series has much more detail than you’re ever likely to need to know.

Conclusion
You can get lost in colors fairly easily, and it’s fun and rewarding to geek out a bit. On the other hand, you can make your LED blinky toys look a lot better just by getting the brightness right, and you do that by figuring out the appropriate gamma for your situation and applying a little math. The “right” gamma is a matter of trial and error, but something around two should work OK for starters. Give it a shot and let me know what you think in the comments. Or better yet, use RGB-gamma-correction in your next project and show us all.
Filed under: Engineering, Hackaday Columns, how-to, led hacks
Is your business card flashy? Is it useful in a pinch? Do they cost $32 each and come with an ePaper display? No? Well, then feast your eyes on this over-the-top business card with an ePaper display by [Paul Schow]. Looking to keep busy and challenge himself with a low-power circuit in a small package, he set about making a business card that can be updated every couple of months instead of buying a new stack whenever he updated his information.
Having worked with ePaper before, it seemed to be the go-to option for [Schow] in fulfilling the ultra-low power criteria of his project — eventually deciding on a 2″ display. Also looking to execute this project at speed, he designed the board in KiCad over a few hours after cutting it down to simply the power control, the 40-pin connector and a handful of resistors and capacitors. In this case, haste made waste in the shape of the incorrect orientation of the 40-pin connector and a few other mistakes besides. Version 2.0, however, came together as a perfect proof-of-concept, while 3.0 looks sleek and professional.
[Schow] advises the assistance of a magnifying glass or microscope in soldering to such a small board — lucky for him there was one available at the nearby TinkerMill – The Longmont Makerspace. He also ran into trouble with the display dimming due to lack of power, but solved it by adding a TLV61225DCKR boost converter. A light on the back flashes when the image is being changed for added effect. While this is a singular and ‘usefulness optional’ business card, don’t hand too many of these out if you care for your wallet.
Filed under: misc hacks
When the story of an invention is repeated as Received Opinion for the younger generation it is so often presented as a single one-off event, with a named inventor. Before the event there was no invention, then as if by magic it was there. That apple falling on Isaac Newton’s head, or Archimedes overflowing his bath, you’ve heard the stories. The inventor’s name will sometimes differ depending on which country you are in when you hear the story, which provides an insight into the flaws in the simple invention tales. The truth is in so many cases an invention does not have a single Eureka moment, instead the named inventor builds on the work of so many others who have gone before and is the lucky engineer or scientist whose ideas result in the magic breakthrough before anyone else’s.
The history of computing is no exception, with many steps along the path that has given us the devices we rely on for so much today. Blaise Pascal’s 17th century French mechanical calculator, Charles Babbage and Ada, Countess Lovelace’s work in 19th century Britain, Herman Hollerith’s American tabulators at the end of that century, or Konrad Zuse’s work in prewar Germany represent just a few of them.
So if we are to search for an inventor in this field we have to be a little more specific than “Who invented the first computer?”, because there are so many candidates. If we restrict the question to “Who invented the first programmable electronic digital computer?” we have a much simpler answer, because we have ample evidence of the machine in question. The Received Opinion answer is therefore “The first programmable electronic digital computer was Colossus, invented at Bletchley Park in World War Two by Alan Turing to break the Nazi Enigma codes, and it was kept secret until the 1970s”.
It’s such a temptingly perfect soundbite laden with pluck and derring-do that could so easily be taken from a 1950s Eagle comic, isn’t it. Unfortunately it contains such significant untruths as to be rendered useless. Colossus is the computer you are looking for, it was developed in World War Two and kept secret for many years afterwards, but the rest of the Received Opinion answer is false. It wasn’t invented at Bletchley, its job was not the Enigma work, and most surprisingly Alan Turing’s direct involvement was only peripheral. The real story is much more interesting.

To Bletchley, Where Miracles Happen

General Heinz Guderian overlooking the operation of an Enigma machine during the Battle of France. Bundesarchiv, Bild 101I-769-0229-11A / Borchert, Erich (Eric) / CC-BY-SA 3.0, via Wikimedia Commons
At this point we’re going to take you to Bletchley, to the modern-day Bletchley Park site and the National Museum Of Computing which occupies one corner of it. The museum has a fascinating collection, of which two galleries are of interest to us here. The first is their Tunny gallery, which explains the context and sequence of events which led to Colossus, and the second is their Colossus gallery, which contains their fully functional replica of a MkII Colossus computer.
The most famous Nazi encoding system is the Enigma, with its portable machines resembling typewriters becoming a ubiquitous symbol of the codebreaking efforts. This was the code used by German military combat units, in slightly different forms across all services, and photographs show them being operated from forward positions or in mobile signals units.
Enigma was not however the only German encoding system in use and intercepted by the Allies, and by no means the only one on which the staff at Bletchley Park were employed. The first section of the museum’s Tunny gallery explains the Lorenz cipher, which was used for secure communication at a much higher level between German high command outposts, encoding teleprinter traffic in real time. It had a superficial resemblance to Enigma in that it employed a rotor system, but instead of Enigma’s system of through-wired contacts its rotors produced a pseudo-random binary sequence that was XORed with the binary teleprinter traffic to produce an encrypted output. Also unlike Enigma the codebreakers did not have the benefit of a captured machine to study until very near the end of the war, so their only means to understand it came from intercepted messages using it.

The Fish That Shortened The War

The museum’s Tunny machine.
The museum takes the visitor through the listening stations and how the frequency-shift-keyed teleprinter traffic was recorded on paper tape and hand-transcribed, before transporting them to the cryptanalysts at Bletchley Park and their efforts to glean the workings of the system. The breakthrough came as a stroke of luck in August 1941 when an operator in Athens sent the same 4,000-character message twice with the same settings on his Lorenz machine, providing the reduced odds of decryption that the Bletchley staff needed to eventually decode it. Using these two ciphertexts and the mechanics of their decoding the mathematician Bill Tutte was then given the task of deducing the operation of the machine, which by early 1942 he had completed. The resultant work was given the codename “Tunny”, after the codename for the Athens communication circuit which had provided the breakthrough. All such links took their codenames from types of fish.
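The weakness that the Athens slip exposed can be shown in a few lines of modern code: with any XOR stream cipher (the Lorenz machine included), two messages sent on the same settings let the keystream cancel out entirely. A rough illustration, not a Lorenz emulation:

```python
# Two messages enciphered with the same keystream (the Athens operator's
# slip): XORing the ciphertexts cancels the keystream completely, leaving
# the XOR of the two plaintexts for the cryptanalysts to pick apart.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = os.urandom(32)  # stand-in for the rotors' pseudo-random stream
p1 = b"ATTACK AT DAWN ON THE LEFT FLANK"
p2 = b"ATTACK AT DUSK ON THE LEFT FLANK"

c1, c2 = xor(p1, key), xor(p2, key)

# The rotors drop out entirely: c1 ^ c2 == p1 ^ p2
assert xor(c1, c2) == xor(p1, p2)
```

From that key-free combination of two similar plaintexts, the Bletchley staff could tease out both messages, and from the recovered keystream Tutte could deduce the machine that generated it.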
At the centre of the museum’s Tunny gallery is their rebuilt Tunny machine, a British electromechanical reproduction of a Lorenz cipher machine produced by the Post Office Telephone research facility at Dollis Hill, London. It could be set with a plugboard equivalent of the Lorenz’s rotor settings and decode messages, but it still required those rotor settings to be available. The effort to automate the discovery of some of those rotor settings resulted by mid-1943 in a machine called the “Heath Robinson”, after the British cartoonist who drew intricate and complex machines performing simple tasks. If you haven’t heard of him but you are aware of Rube Goldberg, you’re on the right track.

The rebuilt Heath Robinson machine.
The museum have recreated a Heath Robinson next to their Tunny, and like the original it keeps a pair of long punched paper tape loops under tension with a system of pulleys. One holds a ciphertext while the other has a sequence of possible settings for one set of rotors, and a set of logic derived from the Tunny machine can be fed the ciphertext automatically along with each of the rotor settings in turn. The resulting output was then used to produce collections of rotor settings that could dramatically shorten the odds for the teams of cryptanalysts.
Once the visitor has been shown both machines in operation, the guide shows a section of tape that has been mangled by the Heath Robinson’s mechanism, and it is explained that the machines were slow and unreliable. In particular a close synchronisation between the two tapes was essential to its operation, something they could easily lose. He then tells the story of how Alan Turing had recommended the engineer with whom he had previously collaborated on the Enigma work to the Heath Robinson team, and that this was Turing’s only direct contribution to Colossus.
Tommy Flowers was Head of the Switching Group at Dollis Hill, and it was his ideas on how the Heath Robinson’s paper tape sequences of Tunny rotor settings could be generated electronically using thyratrons that would result in the machine that became Colossus. The cipher text was still read from a punched tape, but a programmable function could now be performed upon it electronically against the thyratron-generated rotor settings. It was not yet a general purpose stored-program computer as we would know it today, but it fulfilled the description of being a programmable all-electronic digital computer.

In The Presence Of Greatness

The front of the museum’s Colossus MkII.
Walking into the museum’s Colossus gallery as one of the first groups of weekend visitors, we were lucky enough to see it being brought to life. Their Colossus is a replica of a MkII machine completed in 2007, and it stands alone in the centre of the room with the only intrusion a set of discreetly placed safety barriers to keep the public away from high voltages. There are two long parallel racks that would be close to ceiling height if they were not in a wartime hut without a flat ceiling, both studded with the thousands of octal tubes. At the far end is a paper tape reader similar to that of the Heath Robinson, close to the middle are the plugboards and switches through which the machine is programmed, and at the end closest to you is the teleprinter which records the result.
The machine is powered up slowly to reduce thermal shock and prolong the life of its tubes. Our guide told us it only needs a single-digit number of tubes replacing in a typical year, which is impressive considering how many it has. Once it gets under way the slight morning chill of the room is replaced by a significant heat from all those tube filaments, and though the machine is quieter than you might expect there is a whir and cyclic clicking sound from the tape reader.
Standing in the same room as the seminal machine of your art is an interesting experience for an engineer, even when it is a replica. Some of the other visitors seemed to be there because of its association with The War rather than because of its technological significance, but it was interesting to see that we were not the only ones who had evidently wielded a soldering iron or two. It is a moment to reflect on how far we’ve come in over seven decades, to silently praise the memory of the people who built it and — despite Colossus itself being shrouded in secrecy — praise the influence of their work on the machines that followed it.
A short video of a walk round the machine in action is below; if you will excuse the mobile phone video quality, we hope you can get an impression of its size and complexity.

The genuine Lorenz machine at Bletchley, on loan from the Norwegian Armed Forces Museum. The codebreakers did not see one of these machines until the end of the war, which is why we have placed this picture at the bottom of our write-up.
The National Museum Of Computing houses a fascinating collection of vintage and historic computers beyond its work on the wartime machines covered here, and is well worth a visit should you find yourself in the vicinity of Bletchley. It is operated by a charitable trust and relies on its very affordable admission fees and voluntary donations for its continued existence. Put it on your itinerary immediately!
Filed under: classic hacks, computer hacks, Featured, History, slider
To get a SCUBA certification, a prospective diver will need to find a dive shop and take a class. Afterwards, some expensive rental equipment is in order. That is, unless you’re [biketool] who has found a way to build some of his own equipment. If you’re looking for a little bit of excitement on your next dive, this second stage regulator build might be just the thing for you.
It’s worth noting that [biketool] makes it explicitly clear that this shouldn’t be used on any living being just yet. The current test, though, was at 120 PSI using some soda bottles and some scrap bike parts. The OpenSCAD-designed regulator seems to work decently well for something that’s been homemade using some 3D-printed parts and other things available to most tinkerers/makers/hackers. [biketool] also goes over some issues with the regulator leaking and discusses porosity issues inherent in FDM printing but overall this project looks promising. Whether or not you want a pressurized 3D printed vessel that close to your face is ripe for debate.
We don’t see a lot of SCUBA-related hacks around here. After all, it’s one thing to power an air horn with SCUBA tanks, but it’s a completely different thing to build something that keeps you from drowning.
Thanks to [dave] for the tip!
Filed under: 3d Printer hacks
Kerbal Space Program will have you hurling little green men into the wastes of outer space, landing expended boosters back on the launchpad, and using resources on the fourth planet from the Sun to bring a crew back home. Kerbal is the greatest space simulator ever created and teaches orbital mechanics better than the Air Force textbook, but it is missing one thing: switches and blinky LEDs.
[SgtNoodle] felt this severe oversight by the creators of Kerbal could be remedied by building his Kerbal Control Panel, which adds physical buttons, switches, and a real 6-axis joystick for roleplaying as an Apollo astronaut.
The star of this build is the custom six-axis joystick, used for translation control when docking, maneuvering, or simply puttering around in space. Four-axis joysticks are easy, but to move forward and backward, [SgtNoodle] replaced the shaft of a normal arcade joystick with a carriage bolt, added a washer on one end, and used two limit switches to give this MDF cockpit Z+ and Z- control.
The rest of the build is equally well detailed, with a CNC’d front panel, toggle switches and missile switch covers, with everything connected to an Arduino Mega. This Arduino interfaces the switches to the game with the kRPC mod, which creates a script-driven interface to the game. So, toggling the landing gear switch, for instance, triggers a script which interfaces with KSP to lower your landing gear prior to a nice, safe landing. Or, more likely, a terrifying crash.
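The kRPC mod exposes the game to external scripts, so the bridge from panel to game can be quite thin. Here is a minimal sketch of that plumbing, assuming the kRPC Python client and an Arduino that sends one-character switch events over serial; the event characters and helper names are illustrative, not [SgtNoodle]'s actual code.

```python
# Map a one-character switch event to an (attribute, value) pair on the
# kRPC vessel.control object. Characters are an assumed serial protocol.
SWITCH_ACTIONS = {
    'G': ('gear', True),    # landing gear switch down
    'g': ('gear', False),   # landing gear switch up
    'L': ('lights', True),
    'l': ('lights', False),
    'B': ('brakes', True),
    'b': ('brakes', False),
}

def apply_switch(control, event):
    """Set the matching attribute on a kRPC vessel.control-like object."""
    attr, value = SWITCH_ACTIONS[event]
    setattr(control, attr, value)
    return attr, value

def run_panel(port_name='/dev/ttyACM0'):
    """Bridge serial switch events into the game. Requires a running copy
    of KSP with the kRPC server mod, so it is not executed here."""
    import krpc, serial  # third-party: pip install krpc pyserial
    conn = krpc.connect(name='control panel')
    control = conn.space_center.active_vessel.control
    with serial.Serial(port_name, 115200) as port:
        while True:
            event = port.read(1).decode()
            if event in SWITCH_ACTIONS:
                apply_switch(control, event)
```

The nice property of this split is that the dispatch table is pure and testable without the game running, while all the game-side state lives in kRPC's `vessel.control` object.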
Filed under: software hacks, video hacks
Launching a high altitude balloon requires a wide breadth of knowledge. To do it right, you obviously need to know electronics and programming to get temperature, pressure, and GPS data. You’ll have to research which cameras will take good pictures and are easily programmable. It’s cold up there, and that means you need some insulation to keep the batteries warm. If you ever want to find your payload, you’ll also need an amateur radio license.
There’s a lot of work that goes into launching high altitude balloons, and for his Hackaday Prize entry, [Jeremy] designed a simple embedded data recorder capable of flying over 100,000 feet.
This flight data recorder for balloons is based on the ever popular ATMega328, and includes humidity, temperature, pressure, accelerometer, gyroscope, and magnetometer sensors. All of this data is recorded to an SD card. The Real Engineers™ who are wont to criticize design decisions they disagree with might laugh at the use of a 7805 voltage regulator, but in this case it makes a lot of sense. The power wasted by a linear regulator isn’t. It’s turned into heat which keeps the batteries alive a little bit longer.
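The numbers bear this out. A quick back-of-the-envelope check of the "wasted heat is useful" claim, with assumed battery voltage and current draw rather than [Jeremy]'s measured figures:

```python
# A linear regulator drops the full input-output voltage difference at the
# load current, all of it as heat inside the payload box.
V_IN = 9.0     # V, assumed battery stack voltage
V_OUT = 5.0    # V, 7805 output
I_LOAD = 0.15  # A, assumed draw for MCU + sensors + SD card writes

p_dissipated = (V_IN - V_OUT) * I_LOAD        # heat warming the batteries
p_delivered = V_OUT * I_LOAD                  # power doing useful work
efficiency = p_delivered / (p_delivered + p_dissipated)

print(f"heat into payload box: {p_dissipated:.2f} W")
print(f"electrical efficiency: {efficiency:.0%}")
```

With these assumed figures the 7805 delivers about 0.6 W of heat into the enclosure; a switching regulator would have thrown that efficiency gain away on a battery pack that dies faster in the cold.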
This balloon data recorder has already flown, and [Jeremy] got some great pictures out of it. It’s a great piece of the puzzle for an exceptionally multidisciplinary project, and a great entry for the Hackaday Prize.
Filed under: The Hackaday Prize
3 hackers, 16 LEDs, 15 years of development, one goal: A persistence of vision display stick that fits into your pocket. That’s the magicShifter 3000. When waved, the little, 10 cm (4 inches) long handheld device draws stable images in midair using the persistence of vision effect. Now, the project has reached another milestone: production.
The design has evolved since it started with a green LED bargraph around 2002. The current version features 16 APA102 (aka DotStar) RGB LEDs, an ESP-12E WiFi module, an NXP accelerometer/magnetometer, the mandatory Silabs USB interface, as well as a LiPo battery and charger with an impressive portion of power management. An Arduino-friendly firmware implements image stabilization as well as a React-based web interface for uploading and drawing images.
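Why the fast APA102s and not ordinary WS2812s? A rough timing budget shows the constraint; the swipe speed and pixel width below are assumed round numbers, not magicShifter 3000 specifications:

```python
# How often must the 16-LED column update for a readable image in midair?
SWIPE_SPEED = 2.0      # m/s, assumed speed of a brisk hand wave
PIXEL_WIDTH = 0.005    # m, assumed horizontal width of one drawn column

columns_per_second = SWIPE_SPEED / PIXEL_WIDTH       # 400 columns/s
column_period_us = 1_000_000 / columns_per_second    # 2500 us per column

# APA102s are clocked SPI devices: 32-bit start frame, 32 bits per LED,
# and roughly a 32-bit end frame for a short strip.
APA102_BITS = 32 + 16 * 32 + 32
spi_hz = 1_000_000                                   # a modest SPI clock
refresh_us = APA102_BITS / spi_hz * 1e6              # 576 us per column

print(f"{column_period_us:.0f} us available, {refresh_us:.0f} us needed")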
After experimenting with Seeedstudio for their previous prototypes, the team manufactured 500 units in Bulgaria. Their project took them on a roundtrip through hardware manufacturing: from ironing out minuscule flaws for a rock-solid design, through building test rigs and writing test procedures, to yield management. All magicShifter enclosures are — traditionally — 3D printed, so [Overflo] and [Martin] are working in shifts to run the 500 prints, which take about 50 minutes each to complete. The printers are still buzzing, but assembled units can be obtained in their shop.
Over all the years, the magicShifter has earned fame and funding as the over-engineered open hardware pocket POV stick. If you’re living in Europe, chances are that you either already saw one of the numerous prototype units or ran into [Phillip Tiefenbacher] aka [wizard23] at a random hacker event and were given a brief demo of the magicShifter. The project has always documented the status quo of hardware hacking: every year, it got a bit smaller and better, and reflected whatever parts happened to be en vogue.
The firmware and 3D-printable enclosure are still open source, and the schematics for the latest design can be found on GitHub. However, you will search in vain for layout or Gerber files. The risk of manufacturing large batches and then being put out of business by cheap clones left its mark on the project, letting the magicShifter reflect the current, globalized state of hardware hacking once more. Nevertheless, we’re glad this bedrock of POV projects still persists. Check out the catchy explanatory video below.
Filed under: led hacks
A common complaint in the comments of many a Hackaday project is: Why did they use a microcontroller? It’s easy to Monday morning quarterback someone else’s design, but it’s rare to see the OP come back and actually prove that a microcontroller was the best choice. So when [GreatScott] rebuilt his recent DIY coil gun with discrete logic, we just had to get the word out.
You’ll recall from the original build that [GreatScott] was not attempting to build a brick-wall blasting electromagnetic rifle. His build was more about exploring the concepts and working up a viable control mechanism for a small coil gun, and as such he chose an Arduino to rapidly prototype his control circuit. But when taken to task for that design choice, he rose to the challenge and designed a controller using discrete NAND and NOR gates, some RS latches, and a couple of comparators. The basic control circuit was simple, but too simple for safety — a projectile stuck in the barrel could leave a coil energized indefinitely, leading to damage. What took a line of code in the Arduino sketch to fix required an additional comparator stage and an RC network to build a timer to deenergize the coil automatically. In the end the breadboarded circuit did the job, but implementing it would have required twice the space of the Arduino while offering none of the flexibility.
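The safety timer is the interesting bit: an RC network charging toward a comparator threshold replaces one line of firmware. A quick sizing sketch, with illustrative component values rather than [GreatScott]'s actual ones:

```python
import math

# Assumed values for the coil-cutoff timer, not the values from the build.
R = 100e3              # ohms
C = 10e-6              # farads
V_REF_FRACTION = 0.5   # comparator threshold as a fraction of Vcc

# An RC charging curve reaches a given fraction of Vcc at:
#   v(t)/Vcc = 1 - exp(-t/RC)   =>   t = -RC * ln(1 - fraction)
t_trip = -R * C * math.log(1 - V_REF_FRACTION)

print(f"coil de-energized after {t_trip * 1000:.0f} ms")
```

With a 0.5 Vcc threshold the trip time is simply RC times ln(2), about 0.7 RC, so a 100 k / 10 uF pair gives a cutoff on the order of 700 ms; in firmware the same protection is a single timeout constant.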
Not every project deserves an Arduino, and sometimes it’s pretty clear the builder either took the easy way out or was using the only trick in his or her book. Hats off to [GreatScott] for not only having the guts to justify his design, but also proving that he has the discrete logic chops to pull it off.
Filed under: Arduino Hacks, weapons hacks
If you entered the world of professional computing sometime in the 1960s or 1970s there is a high probability that you would have found yourself working on a minicomputer. These were a class of computer smaller than the colossal mainframes of the day, with a price tag that put them within the range of medium-sized companies and institutions rather than large corporations or government-funded entities. Physically they were not small machines, but compared to the mainframes they did not require a special building to house them, or a high-power electrical supply.
A PDP-11 at The National Museum Of Computing, Bletchley, UK.
One of the most prominent among the suppliers of minicomputers was Digital Equipment Corporation, otherwise known as DEC. Their PDP line of machines dominated the market, and can be found in the ancestry of many of the things we take for granted today. The first UNIX development in 1969 for instance was performed on a DEC PDP-7.
DEC’s flagship product line of the 1970s was the 16-bit PDP-11 series, launched in 1970 and continuing in production until sometime in the late 1990s. Huge numbers of these machines were sold, and it is likely that nearly all adults reading this have at some time or other encountered one at work even if we are unaware that the supermarket till receipt, invoice, or doctor’s appointment slip in our hand was processed on it.
During that over-20-year lifespan DEC did not, of course, retain the 74-series TTL architecture of the earliest model. Successive PDP-11 generations featured ever greater integration of their processor, culminating by the 1980s in the J-11, a CMOS microprocessor implementation of a PDP-11/70. This took the form of two integrated circuits mounted on a large 60-pin DIP ceramic wafer. It was one of these devices that came the way of [bhilpert], and instead of retaining it as a curio he decided to see if he could make it work.
The PDP-11 processors had a useful feature: a debugging console built into their hardware. This means that it should be a relatively simple task to bring up a PDP-11 processor like the J-11 without providing the rest of the PDP-11 to support it, and it was this task that he set about performing. Providing a 6402 UART at the address expected of the console with a bit of 74 glue logic, a bit more 74 for an address latch, and a couple of 6264 8K by 8 RAM chips gave him a very simple but functional PDP-11 on a breadboard. He found it would run with a clock speed as high as 11MHz, but baulked at a 14MHz crystal. He suggests that the breadboard layout may be responsible for this. Hand-keying a couple of test programs, he was able to demonstrate it working.
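The minimal system boils down to a very small memory map. As a sketch, assuming the UART sits at the standard DEC console register addresses (777560 through 777566 octal) and the paired 6264s fill the bottom 16 KB; the exact split is an assumption about [bhilpert]'s decode logic, not taken from his notes:

```python
# A toy model of the breadboard PDP-11's address decoding.
CONSOLE_BASE = 0o777560   # DEC's conventional console UART registers
CONSOLE_TOP = 0o777566
RAM_TOP = 16 * 1024       # two 8K x 8 chips paired into 8K 16-bit words

def decode(addr):
    """Return which device a bus address selects in this minimal system."""
    if CONSOLE_BASE <= addr <= CONSOLE_TOP:
        return 'uart'
    if addr < RAM_TOP:
        return 'ram'
    return 'unmapped'

for a in (0, 0o037776, 0o777560):
    print(f"{a:>8o}: {decode(a)}")
```

A couple of 74-series decoders implementing exactly this mapping is all the "rest of the PDP-11" the J-11 needs to come up talking on its console.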
We’ve seen a lot of the PDP-11 on these pages over the years. Of note are a restoration of a PDP-11/04, this faithful reproduction of a PDP-11 panel emulated with the help of a Raspberry Pi, and an entire PDP-11 emulated on an AVR microcontroller. We have indeed come a long way.
Thanks [BigEd] for the tip.
Filed under: classic hacks, computer hacks
When you attend a very large event such as EMF Camp, there is so much going on that it is impossible to catch everything. It’s easy to come away feeling that you’ve missed all the good stuff, that you somehow wasted your time, and that everyone else had complete focus and got so much more out of the event.
In an odd twist, one of the EMF 2016 talks people have been raving about is very relevant to that fear of being unable to take in a festival programme. [Jessica Rose] gave a talk about imposter syndrome: a feeling of inadequacy compared to your peers, and a constant anxiety about being exposed as a fraud, that will probably be very familiar to many readers. As she points out, it’s a particularly cruel affliction in that it affects the people who do have all the skills, while the real impostors have an inflated confidence in their abilities.
This has significant relevance to many in our community and for a single presentation to get so many people talking about it at an event like EMF Camp means it definitely hit the mark. The full video is embedded below the break. At about half an hour long it’s well worth a look.
We haven’t specifically discussed Imposter Syndrome here on Hackaday before. But a closely related topic is social anxiety which sometimes prevents us from getting out of the basement of solitude to meet other excellent hackers. It’s good to remember that nobody bursts onto the scene as an elite guru and catching up with other people is a great way to pollinate ideas and learn new things.
If you would like to see more of this speaker, there is another YouTube video of a talk she gave at DevTalks Bucharest 2016, Automating Access to Development.
Filed under: cons
Chances are you’ve spent a lot of time trying to think of the next great project to hit your workbench. We’ve all built up a set of tools, honed our skills, and set aside some time to toil away in the workshop. This is all for naught without a really great project idea. The best place to look for this idea is where it can make life a little better.
I’m talking about Assistive Technologies which directly benefit people. Using your time and talent to help make lives better is a noble pursuit and the topic of the 2016 Hackaday Prize challenge that began this morning.
Assistive Technology is a vast topic and there is a ton of low-hanging fruit waiting to be discovered. Included in the Assistive Technology ecosystem are prosthetics, mobility, diagnostics for chronic diseases, devices for the aging or elderly and their caregivers, and much more. You can have a big impact with your prototype device, both directly by making lives better and by inspiring others to build on your effort.
Need some proof that this is a big deal? The winners of the 2015 Hackaday Prize developed Eyedrivomatic, a 3D-printed mechanism that links electric wheelchair controls with eye-movement trackers. This was spurred by a friend of theirs with ALS who was sometimes stuck in his room all day if he forgot to schedule a caregiver to take him to the community room. The project bridges the existing technologies already available to many people with ALS, providing greater independence in their lives. The OpenBionics Affordable Prosthetic Hands project developed a bionic hand with a clever whiffletree system to enable simpler finger movement. This engineering effort brings down the cost and complexity of producing a prosthetic hand and helps remove some of the barriers to getting prosthetics to those who need them.
The Is the Stove Off project adds peace of mind and promotes safe independence through an Internet-connected indicator to ensure the kitchen stove hasn’t been left on and that it isn’t turned on at peculiar times. Pathfinder Haptic Navigation reimagines the tools available to the blind for navigating their world. It uses wrist-mounted ultrasonic sensors and vibration feedback, allowing the user to feel how close their hands are to objects. Hand Drive is another wheelchair add-on to make wheeling yourself around a bit easier by using a rowing motion that doesn’t depend as much on having a strong hand grip on the chair’s push ring.
In most cases, great Assistive Technology is not rocket science. It’s clever recognition of a problem and careful application of a solution for it. Our community of hackers, designers, and engineers can make a big impact on many lives with this, and now is the time to do so.
Enter your Assistive Technology in the Hackaday Prize now and keep chipping away on those prototypes. We will look at the progress of all of the entries starting on October 3rd, choosing 20 entries to win $1000 each and continue on to the finals. These finalists are eligible for the top prizes, which include $150,000 and a residency at the Supplyframe Design Lab, $25,000, two $10,000 prizes, and a $5,000 prize.
Filed under: Hackaday Columns, The Hackaday Prize
This is a tale of old CPUs, intensive SMD rework, and things that should work but don’t.
Released in 1994, Apple’s Powerbook 500 series of laptop computers were the top of the line. They had built-in Ethernet, a trackpad instead of a trackball, stereo sound, and a full-size keyboard. This was one of the first laptops that looked like a modern laptop.
The CPU inside these laptops — save for the high-end Japan-only Powerbook 550c — was the 68LC040. The ‘LC‘ designation inside the part name says this CPU doesn’t have a floating point unit. A few months ago, [quarterturn] was looking for a project and decided replacing the CPU would be a valuable learning experience. He pulled the CPU card from the laptop, got out some ChipQuick, and reworked a 180-pin QFP package. This did not go well. The replacement CPU was sourced from China, and even though the number lasered onto the new CPU read 68040 and not 68LC040, this laptop was still without a floating point unit. Still, it’s an impressive display of rework ability, and generated a factlet for the marginalia of the history of consumer electronics.
Faced with a laptop that was effectively unchanged after an immense amount of very, very fine soldering, [quarterturn] had two choices. He could put the Powerbook back in the parts bin, or he could source a 68040 CPU with an FPU. He chose the latter. The new chip is a Freescale MC68040FE33A. Assured by an NXP support rep this CPU did in fact have a floating point unit, [quarterturn] checked the Mac’s System Information. No FPU was listed. He installed NetBSD. There was no FPU installed. This is weird, shouldn’t happen, and now [quarterturn] is at the limits of knowledge concerning the Powerbook 500 architecture. Thus, Ask Hackaday: why doesn’t this FPU work?
The holy scroll passed from Motorola to Freescale to NXP states just a few differences between the FPU-less 68LC040 and the FPU-equipped 68040. They’re both pin compatible with each other, object code compatible (except for FPU instructions), and timing requirements are the same. This should be a drop-in replacement. It isn’t, and there is no satisfactory explanation for why that’s the case.
The first step to diagnosing a problem is eliminating possible problems, and in this case it’s probably not a software issue. The CPU reports no FPU in both Mac OS and NetBSD. The FPU-equipped Powerbook 550 exists and the Mac OS uses the same Gestalt ID for the entire Powerbook 500 series. Since [quarterturn] tested the CPU under NetBSD, there’s probably nothing weird going on in the ROM, either; Mac ROMs are dedicated almost entirely to the Macintosh Toolbox and the Mac OS. The problem probably isn’t software.
According to the Motorola, Freescale, and NXP documents, the problem probably isn’t hardware. This is the second new CPU in this computer, and this time the CPU probably has a floating point unit. We can probably trust the NXP support rep. We can also trust the PCB that has been reworked several times over the course of this project. If it works with an FPU-less CPU, it should work with an FPU-equipped CPU.
Without an obvious solution to this problem apparent in the software, hardware, or even the ROM for this laptop, this project has turned into a mystery. Surely there’s some errata tucked away in a datasheet somewhere that will tell us what’s going on. There might be a handful of wizened Apple engineers who know what the problem is. The explanation to this problem is going to be very hard to find, which is why this project is an Ask Hackaday column. The comments are open, and the eventual answer will assuredly be very interesting.
Filed under: Ask Hackaday, Hackaday Columns
It’s the water-borne equivalent of building a minibike out of steel pipe and an old lawnmower engine. Except it’s a DIY personal watercraft made out of aluminum and an old chainsaw, and it has that same garage build feel – and the same disappointing results.
When we first saw the video below, we were hoping for one of those boats that let you water ski by yourself, or a wave-hopping, rooster tailing DIY jet ski. Alas, the chainsaw [MakeItExtreme] chose to power this boat is woefully underpowered, and the boat barely has enough oomph to make a wake. [MakeItExtreme] acknowledges the underwhelming results and mentions plans to fix the boat with a more powerful engine and a water jet drive rather than the trolling motor propeller they used. Still, whatever improvements they make will probably leverage the work they put into the hull, which is a pretty impressive display of metalwork. We’re used to seeing [MakeItExtreme] work in steel, so it was interesting to watch aluminum panels being cut, bent, and welded into a watertight hull. Looks like there’s plenty of room in there for more power, and we’re looking forward to version 2.0 of this build.
If you like rough and ready metalworking videos, there are plenty of them on [MakeItExtreme]’s YouTube channel. We’ve covered quite a few before, including this all-terrain hoverboard and a spot welder that’s more-or-less safe to use.
Filed under: transportation hacks
When was the last time you went to a library? If it’s been more than a couple of years, the library is probably a very different place than you remember. Public libraries pride themselves on keeping up with changing technology, especially technology that benefits the communities they serve. No matter your age or your interests, libraries are a great resource for learning new skills, doing research, or getting help with just about any task. After all, library science is about gathering together all of human knowledge and indexing it for easy lookup.
It doesn’t matter if you’re not a researcher or a student. Libraries exist to serve everyone in a class-free environment. In recent years, patrons have started looking to libraries to get their piece of the burgeoning DIY culture. They want to learn to make their lives better. Public libraries have stepped up to meet this need by adding new materials to their collections, building makerspaces, and starting tool libraries. And this is in addition to ever-growing collections of electronic resources. Somehow, they manage to do all of this with increasingly strained budgets.
The purpose of this article is to explore the ways that libraries of all stripes can be a valuable resource to our readers. From the public library system to the sprawling academic libraries on college campuses, there is something for hackers and makers at all levels.
To the Library, and Step on It
The Boston Public Library. Image via Boston Magazine
Before the Internet, before MSN Encarta, before Encyclopedia Britannica, there was the public library. Maybe the L word sends you back to elementary school when you were just tall enough to peek into the top drawers of the card catalog and thumb through the hand-typed entries. Maybe you’re young enough that you’ve always known the public library to have free Internet access and rows of computers available for playing Minecraft and updating Hackaday.io.
Some would argue that the internet and the rise of e-readers have made libraries an outdated concept or even obsolete. But libraries pride themselves on keeping up with the times. In spite of their place in popular culture, libraries aren’t just about overdue books and being reprimanded for talking. They have offered popular movies and music alongside books for years (these days as streaming resources), and they continue to keep up with changing technology. Not only do they buy into new technologies as much as budgets allow, most public libraries offer free classes to help explain them.
Public libraries have long been a good starting place for the do-it-yourselfer. Just take a look at the reference section sometime. You can find a service manual to fix your car or a copy of the latest National Electrical Code Handbook. Can’t afford the 3rd edition of The Art of Electronics or the latest Raspberry Pi cookbook? The library has your back. Those books we think you should read are in the stacks as well. Whether you prefer to read about a new skill or watch a DVD, a librarian can lead you to just the right material—it’s part of the thrill of library science. Librarians love information sharing and learning so much that it’s no surprise that they would use their limited budgets to make room for makerspaces.
Making Space for Making
The DC public library’s FabLab. Image via The Atlantic
I’m writing this from the newly expanded makerspace in the central branch of my local public library. A year or so ago, this library had a small makerspace set up in the middle of the main room. A healthy collection of books about making and robotics sat outside of it to draw people in. During the renovation project, the makerspace grew to six times its size thanks in part to a generous donation from a locally based engineering firm. Now, most of the equipment you would expect to find in a makerspace is here—3D printers, a laser cutter, a CNC, sewing machines, hand tools, soldering irons—and it’s all completely free to use. And thanks to the aforementioned donation, the 3D printer filament is free to use, at least for now.
If you can’t justify the cost of a makerspace membership to print a one-off replacement doohickey for your washing machine or make a cutting board in the shape of your home state, get yourself to a library. Not only will you be able to finish your project for next to nothing, there will be plenty of knowledge and helpful people around to guide you.
The Round Reading Room of King’s College London’s Maughan Library. Image via Wikipedia
Let Me School You on Academic Libraries
Not a card-carrying student of your local college? Depending on the library, it may not matter. I’ve taken classes at the community college for years. I visit their impressive library as often as I can, whether I’m taking a class or not. Sometimes I go there just to bask in the knowledge around me or contemplate my existence in the deafening silence of the 2nd floor quiet room.
There are some important differences between public and academic libraries. Academic libraries are either general-knowledge or they cater to a specific discipline such as engineering or law. Compared with public libraries, academic libraries tend to have more of every type of material and usually have copies of textbooks used by the school. In the US, they use the Library of Congress classification system, which allows for greater nuance in categorization within a given collection than the Dewey Decimal System. While public libraries offer access to databases like JSTOR and reference materials such as the Oxford English Dictionary, an academic library card is your ticket into many more databases and academic journals.
Check Out This Chainsaw
In the last decade or so, public libraries in the United States have begun to offer many more types of materials to borrow, like Kill-A-Watt meters. Some offer more diverse objects like musical instruments, gaming consoles, fishing poles, and works of art. A handful of public libraries have even created separate tool libraries to supplement their offerings. Patrons can check out all kinds of tools and equipment just like they would a book or a DVD. Most libraries offer tool checkout at no additional charge—you just need to show your library card and maybe proof of residence.
A well-organized collection at Toronto Tool Library
There are also community tool libraries out there that operate independently of the public library system. Here is a map of known tool libraries throughout the world. If there isn’t one in your town, consider starting one up. You might be surprised at the response and support that you get from the community.
We’ve all got projects that we’d like to start or work on or finish, if only we could borrow a spokeshave or a chainsaw or some inside calipers. By the same token, we probably all have something we could donate to a tool library, be it a simple screwdriver or a few hours a week of volunteer time.
If you’re not sure how to get started, here is a guide that was put together by West Seattle Tool Library. It has sample forms and documents that can be edited to fit your needs. Oh, and don’t forget to buy some liability insurance to protect yourself.
Tool libraries are a boon to the community no matter who runs them. When people have access to the things they need to be able to make home improvements and repairs or solve their own transportation problems, the world becomes a better place.
The mission of libraries hasn’t changed — they seek to provide access to information for all members of society. With the rise of freely available online information, libraries have modernized to include online access and continue to offer a conduit to that which isn’t free: audio and visual media, journal subscriptions, manuals and documentation, electronic devices, and yes, even tools. The public library deserves a higher rank in all of our mental lists of go-to resources and is a great place to donate some of our time and talent to contribute back to the local community.
Filed under: Featured, Interest, Original Art, slider
[ajw61185] made a video overview of a radio-controlled A-10 jet modified to spew a hail of harmless Nerf balls as it strafes helpless cardboard cutouts of T-72 tanks on a bright, sunny day.
The firing assembly in the jet comes from a Nerf Rival Zeus Blaster, which is itself an interesting device. It uses two electric flywheels to launch soft foam balls – much like a pitching machine. With one flywheel running a little faster than the other, the trajectory can be modified. For example, a slight backspin generates Magnus lift, giving the balls a longer and more stable flight path. Of course, foam balls slow down quickly once launched, and at high speeds the aircraft can overtake the same projectiles it just fired, but it’s fun all the same.
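The two-wheel arrangement also makes the launch physics easy to estimate: the ball leaves at roughly the mean surface speed of the wheels, and the speed difference across the ball sets its spin. Wheel radius and RPM figures below are illustrative guesses, not measured Zeus Blaster numbers:

```python
import math

WHEEL_RADIUS = 0.02   # m, assumed flywheel radius
BALL_RADIUS = 0.011   # m, Rival rounds are about 22 mm in diameter

def surface_speed(rpm):
    """Linear speed of the flywheel rim in m/s."""
    return rpm / 60 * 2 * math.pi * WHEEL_RADIUS

v_top = surface_speed(9000)       # assumed top wheel speed
v_bottom = surface_speed(10000)   # bottom wheel runs a little faster

v_ball = (v_top + v_bottom) / 2                 # translational launch speed
spin = (v_bottom - v_top) / (2 * BALL_RADIUS)   # rad/s of imparted spin

print(f"~{v_ball:.0f} m/s, {spin * 60 / (2 * math.pi):.0f} rpm of spin")
```

With these assumed numbers the ball leaves at around 20 m/s with several hundred RPM of spin; which wheel runs faster determines whether that spin is top- or backspin.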
Cramming the firing assembly into the aircraft took some cleverness. The front of the jet contains the flywheel assembly, and a stripped-down removable magazine containing the foam balls fits behind it. A flick of a switch on the controller spins up the flywheels, and another flick controls a servo that allows the balls to enter the firing assembly and get launched. The ammo capacity on the jet is low at only twelve shots per load, and it fires all twelve in roughly half a second. Since the balls are fired at the ground in a known area, they’re easy to retrieve.
Even better than a higher ammo capacity would be a first person view cockpit, but after cramming in a Nerf blaster there might not be much room left even in a model as large as the FreeWing A-10. On the other end of the scale for aircraft are models so tiny that even outfitting them with the bare essentials of controlled flight is an achievement.
Filed under: drone hacks