Hackaday
Fresh hacks every day

Marble Chooses its own Path

Wednesday, 03/07/2018 - 07:00

[Snille]’s motto is “If you can’t find it, make it and share it!” and we could not agree more. We wager that you won’t find his Roball sculpture on any shopping website, so it follows that he made, and subsequently shared, his dream. The sculpture has an undeniable elegance, with black brackets holding brass rails, all on top of a wooden platform painted white. He estimates this project took 400 hours to design and build, and that is easy to believe.

Our first assumption was that there must be an Arduino reading the little red button which starts a sequence. A 3D-printed robot arm grasps a cat’s eye marble and randomly places it on a starting point, from which it invariably rolls to its ending point. The brains are actually a Pololu Mini Maestro 12-channel servo controller. The hack uses a non-uniform marble and an analog sensor at the pickup position to randomly select the next track.
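
For a sense of what talking to a Maestro looks like, here is a minimal Python sketch using Pololu’s documented serial “compact protocol”. This is not [Snille]’s code; the port, channel numbers, and pulse widths are invented for illustration:

# Read a (hypothetical) analog sensor channel on a Pololu Maestro and use
# it to pick a servo target -- the "lumpy marble as random number
# generator" idea, driven from a PC over the Maestro's USB serial port.
import serial

maestro = serial.Serial('/dev/ttyACM0', 9600)

def set_target(channel, quarter_us):
    # Compact protocol: 0x84, channel, low 7 bits, high 7 bits of the
    # target, expressed in quarter-microseconds (6000 = 1500 us)
    maestro.write(bytes([0x84, channel, quarter_us & 0x7F, (quarter_us >> 7) & 0x7F]))

def get_position(channel):
    # 0x90 returns two bytes; for a channel configured as an analog input
    # this is the raw 0-1023 reading rather than a servo position
    maestro.write(bytes([0x90, channel]))
    lo, hi = maestro.read(2)
    return lo | (hi << 8)

reading = get_position(11)           # assumed sensor channel
track = reading % 4                  # the non-uniform marble makes this unpredictable
set_target(0, 4000 + track * 800)    # swing the arm to the chosen drop point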

If meticulously bending brass is your idea of a good time, he also has a video of a lengthier sculpture with less automation, but it’s bent brass porn. If marbles are more your speed, you know we love [Wintergatan] and his Incredible Marble Music Machine. If that doesn’t do it for you, you can eat it.

Hacking a Pint-Sized Mercedes

Wednesday, 03/07/2018 - 04:00

[Jonas] bought an electric Mercedes “ride on” toy for his one-year-old son. At least that’s his story. However, the vehicle has become a target for dad’s obsession with hacking and he’s already done quite a few upgrades. Even better, he did quite a bit of analysis on what’s already there. He isn’t done, but he’s promised quite a bit in the next installment which isn’t out yet.

The original car can take a driver or it can use remote control. [Jonas] has an ambitious list of ideas, some of which are still not complete:

  • More speed, along with softer acceleration and braking
  • Improve the radio controller
  • Proper rubber tires
  • Proper stereo system
  • Individual brake disks on the front wheels
  • Improved horn
  • Proper seat belt or maybe even a new seat

There have also been some extra upgrades like the ignition key because — hey — kids love to play with keys, right? He also made the hood open properly with hinges so you can get in there to work, and added a backup camera.

Unfortunately, the increased power cracked one of the plastic gears in the drive train. That will get replaced with something more suitable. It has all turned into a big project. From [Jonas’] blog:

I also decided this build will need welding. Having never welded aluminium, i ordered a 200Amp TIG/MMA welding machine, rebuilt the main fuse-box in my house to be able to run it without burning my house down. I am currently waiting for a tube of Argon gas before i can start climbing that hill.

Good thing his son needed that Mercedes. All kidding aside, we love to see consumer goods really customized like this and he seems well on his way to having quite the conversation piece.

Of course, you don’t need to rebuild your house wiring and buy a welder to customize a toy car in every case. On the other hand, you could hack a real car.

Clever Approach to Stylus Alignment

Wednesday, 03/07/2018 - 02:30

Digitally stored music is just data. But not long ago, music was analog and required machines with moving parts. If you have never owned a record player, you at least know what they look like, now that there’s a(nother) vinyl revival. What you may not be aware of is that the player’s stylus needs to be aligned. It makes sense: that hypersensitive needle can’t be expected to perform well if it’s tearing across a record like a drift racer.

There are professional tools for ensuring alignment, but it’s not something you’ll need each day. [Ali Naci Erdem] shows us his trick for combining a printable template with a mirror to get the same results without the professional tool costs. Instead of ordinary printer paper, he prints the template on a piece of clear plastic and lays it across a small mirror. These are both items which can be picked up at a hobby store, which is not something we can say about a record player mirror protractor.

We love music hacks like this informative introduction to circuit bending, the wonderful [Martin] from Wintergatan, or if you want to get weird, an organ made from Furbies.

What’s Inside A Neonode Laser Sensor?

Wednesday, 03/07/2018 - 01:00

Every once in a while, you get your hands on a cool piece of hardware, and of course, it’s your first instinct to open it up and see how it works, right? Maybe see if it can be coaxed into doing just a little bit more than it says on the box? And so it was last Wednesday, when I was at the Embedded World trade fair and stumbled on a cool touch display floating apparently in mid-air.

The display itself was a sort of focused Pepper’s Ghost illusion, reflected off of an expensive mirror made by Aska3D. I don’t know much more — I didn’t get to bring home one of the fancy glass plates — but it looked pretty good. But this display was interactive: you could touch the floating 2D projection as if it were actually there, and the software would respond. What was doing the touch response in mid-air? I’m a sucker for sensors, so I started asking questions and left with a small box of prototype Neonode zForce AIR sensor sticks to take apart.

The zForce sensors are essentially an array of IR lasers and photodiodes with some lenses that limit their field of view. The IR light hits your finger and bounces back to the photodiodes on the bar. Because the photodiodes have a limited angle over which they respond, they can be used to triangulate the distance of the finger above the display. Scanning quickly among the IR lasers and noting which photodiodes receive a reflection can locate a few fingertips in a 2D space, which explained the interactive part of the floating display. With one of these sensors, you can add a 2D touch surface to anything. It’s like an invisible laser harp that can also sense distance.

The intended purpose is fingertip detection, and that’s what the firmware is good at, but it must also be the case that it could detect the shape of arbitrary (concave) objects within its range, and that was going to be my hack. I got 90% of the way there in one night, thanks to affordable tools and free software that every hardware hacker should have in their toolbox. So read on for the unfortunate destruction of nice hardware, a tour through some useful command-line hardware-hacking tools, and gratuitous creation of animations from sniffed SPI-like data pulled off of some test points.

Cracking Open the Case

In retrospect, it’s probably not necessary to take one of these things apart — the diagrams on the manufacturer’s website are very true to life. Inside, you’ll find a PCB with an IR laser and photodiode every 8 mm, some custom-molded plastic lenses, and a couple of chips. Still, here are the pretty pictures.

The lenses are neat, consisting of a 45-degree mirror that allows the PCB-mounted diodes to emit and receive out of the thickness of the bar. The lasers and photodiodes share lenses, reducing the manufacturing cost. Most of the thin PCB after the first three cells is “empty”, existing just to hold the laser and photodiode chips. It’s a nice piece of hardware.

The chip-on-board ICs aren’t even covered in epoxy — although these boards are marked “prototype” so who knows if this is true of the production units. According to the advertising copy, one of these two chips is a custom ASIC that does the image processing in real-time and the other is a custom STM32 ARM microcontroller. Speculate about which is which in the comments!

The PCB is glued in place under the metal frame, and there are certainly no user-serviceable parts inside. Sadly, some bond wires got pulled loose when I was removing the PCB. When I put this one sensor stick back together, an area near the custom ASIC gets hot. Sacrificed for my idle curiosity! Sigh.

The Basics: The USB Interface

I was given a prototype version of the sensor demo kit, which includes a breakout board for the USB and I2C finger-detection functionalities of the sensors, so of course, I had to test them out. Plugging it in and typing dmesg showed the “Neonode AirModule 30v” as a USB HID device, which means that figuring everything out about its USB functionality is going to be a cakewalk because all the data it spits out is described in a data descriptor.

I used usbhid-dump to read in the descriptor, and [Frank Zhao]’s excellent descriptor decoder to get it all in plain text. It looks like it’s a standard digitizer that supports six fingers and has a vendor-defined configuration endpoint. Here’s the descriptor dump if you want to play along. Dumps like this are great starting points if you’re making your own USB HID device, by the way. See what functionalities your mouse has.

But frankly, poring over a descriptor file is hard work. dmesg said the sensor was attached as /dev/usb/hiddev3, so why not see what it’s doing in real-time while I wave my finger around? sudo cat /dev/usb/hiddev3 | xxd takes the raw binary output and passes it through xxd which turns it into a “human-readable” hexdump. (The genius of xxd is the -r flag which turns your edited hexdump back into a binary, enabling 1990’s-era cracking on the command line.) Just by watching the numbers change and moving my finger, I figured out which field represented the finger count, and which fields corresponded to the 16-bit X- and Y-coordinates of each finger. And it’s reporting zeroes for the un-measured fingers, as per the descriptor.
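
If you want the same finger-waving trick without squinting at a hexdump, the hiddev device emits fixed-size event structs (a 32-bit usage code plus a 32-bit signed value), so a few lines of Python will print them and let you watch which usages twitch as you move a finger. A sketch, assuming nothing beyond the device path above:

import struct

EVENT = struct.Struct('Ii')   # struct hiddev_event: usage code, signed value

with open('/dev/usb/hiddev3', 'rb') as dev:
    while True:
        hid, value = EVENT.unpack(dev.read(EVENT.size))
        print(f'usage {hid:#010x} = {value}')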

Of course, all of this, as well as the complete specs for the I2C interface are available in the zForce online documentation. The commands are wrapped up in ASN.1 format, which is a dubious convenience. Still, if all you want to do is use these devices to detect fingers over USB or I2C, it’s just a matter of reading some documentation and writing a driver.

Logic Analyzer vs. Test Points

To have a little more fun with these sensor bars, I started poking around the test points exposed on the back of the unit. The set closest to the output connector is mostly duplicates of the pins on the connector itself, and isn’t that interesting. More fun is a constellation (Pleiades?) of seven test points that seems to only be available on the sensor bars longer than 180 mm.

One point had a clear 21 MHz clock signal running periodically, and two other lines had what seemed to be 10.5 MHz data, strongly suggesting some kind of synchronous serial lines. Two other pins in this area emitted pulses, probably relating to the start of a sensor sweep and the start of processed data, but that wouldn’t be obvious until I wired up some jumpers and connected a logic analyzer.

I really like the open-source Sigrok software for this kind of analysis. The GUI pulseview makes it fairly painless to explore signals that you don’t yet understand, while switching up to the command-line sigrok-cli for repetitive tasks makes some fairly sophisticated extensions easy. I’ll show you what I mean.

I started off with a Saleae Logic clone, based on the same Cypress FX2 chip. These are a great deal for $5 or so, and the decent memory depth and good Sigrok support make up for the low 24 MHz sampling rate. That gave me a good overview of the signals and confirmed that the device goes into a low-power scan mode when no fingers are present, and that triggering when pin 5 went low isolated the bulk of the data nicely. But in order to actually extract whatever data was on the three synchronous serial pins, I needed to move up in speed.

The Openbench Logic Sniffer (OLS) will do 100 MHz, which is plenty for this application, but it has a very short 24 k sample memory that it has to relay back to Sigrok over a 115,200 baud serial line. Still, I could just squeeze a full read in at 50 MHz. Using Sigrok’s SPI decoders on the clock and two data lines gave me what looked like good data. But how to interpret it? What was it?

The Command Line, Graphics, and Real-Time Fingerwaving

Getting one capture out of pulseview is easy, but figuring out what that data means required building up a tiny toolchain. The command-line and sigrok-cli to the rescue:

sigrok-cli --driver=ols:conn=/dev/ttyACM3 --config samplerate=50m --samples 20k \
    --channels 0-5 --wait-trigger --triggers 5=0 -o capture.sr

This command reads from the OLS on the specified serial port, waits for a trigger when channel 5 goes low, and outputs the data in Sigrok’s (zipped) data format.

sigrok-cli -i capture.sr -P spi:clk=0:miso=1:cpha=1 -B spi | tail -c +3 > spi1.bin
sigrok-cli -i capture.sr -P spi:clk=0:miso=2:cpha=1 -B spi | tail -c +3 > spi2.bin

These two commands run the SPI decoders on the acquired data. It’s not necessary to do this in a separate step unless you’d like the output in two separate files, as I did. The -P flag specifies the protocol decoder, and -B tells it to output just the decoded data in binary. Tail aligns the data by dropping the three header bytes.

Now for the real trick: plotting the data, waving my finger around interactively, and hoping to figure out what’s going on. You’d be surprised how often this works with unknown signals.

t=`date +%s`
convert -depth 8 -size 15x45+0 gray:spi1.bin -scale 200 out1_${t}.png
convert -depth 8 -size 15x45+0 gray:spi2.bin -scale 200 out2_${t}.png
convert out1_${t}.png spacer.png out2_${t}.png +append image_${t}.png
convert out1_${t}.png spacer.png out2_${t}.png +append foo.png

Convert is part of the ImageMagick image editing and creation toolset. You can spend hours digging into its functionality, but here I’m just taking bytes from a file, interpreting them as grayscale pixels, combining them into an image of the specified dimensions, and scaling it up so that it’s easier to see. That’s done for each data stream coming out of the sensor.

The two are then combined side-by-side (+append) with a spacer image between them, timestamped, and saved. An un-timestamped version is also written out so that I could watch progress live, using eog because it automatically reloads an image when it changes.
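
To keep that live view fed, the capture, decode, and render steps want to run in a loop. One way to glue the commands above together from Python (same file names and spacer image as before, nothing else assumed):

import subprocess, time

while True:
    t = int(time.time())
    # grab one triggered capture from the OLS
    subprocess.run(['sigrok-cli', '--driver=ols:conn=/dev/ttyACM3',
                    '--config', 'samplerate=50m', '--samples', '20k',
                    '--channels', '0-5', '--wait-trigger', '--triggers', '5=0',
                    '-o', 'capture.sr'], check=True)
    # run the SPI decoder once per data line
    for miso, out in ((1, 'spi1.bin'), (2, 'spi2.bin')):
        spi = subprocess.run(['sigrok-cli', '-i', 'capture.sr',
                              '-P', f'spi:clk=0:miso={miso}:cpha=1', '-B', 'spi'],
                             capture_output=True, check=True).stdout
        with open(out, 'wb') as f:
            f.write(spi[2:])   # skip the header bytes, as tail -c +3 does
    # render both streams and glue them side by side
    for n in (1, 2):
        subprocess.run(['convert', '-depth', '8', '-size', '15x45+0',
                        f'gray:spi{n}.bin', '-scale', '200', f'out{n}_{t}.png'],
                       check=True)
    subprocess.run(['convert', f'out1_{t}.png', 'spacer.png', f'out2_{t}.png',
                    '+append', f'image_{t}.png'], check=True)
    subprocess.run(['convert', f'out1_{t}.png', 'spacer.png', f'out2_{t}.png',
                    '+append', 'foo.png'], check=True)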

Cobbling all of this together yields a flow that takes the data in from the logic analyzer, decodes it into bytes, and turns the bytes into images. That was enough for me to see that I’m capturing approximate position data from (probably) the output of Neonode’s custom ASIC. But why stop there? I turned the whole endeavor into a video by combining the images at 8 FPS:

ffmpeg -r 8 -pattern_type glob -i "image_*.png" \
    -c:v libx264 -vf fps=8 -pix_fmt yuv420p finger_hand_movie.mp4


That’s me moving my finger just above the bar’s surface, and then out of range, and then moving one hand, and then both around in the frame. The frames with less data are what it does when nothing is detected — it lowers the scanning rate and apparently does less data processing. You can also see the reason for picking the strange width of 15 pixels in the images — there are 30 photodiodes in this bar, with data for 15 from one side apparently processed separately from the 15 on the other. Anyway, picking a width of 15 makes the images wrap around just right.

There’s a bunch of data I still don’t understand. The contents of the header and the blob that appears halfway down the scan are still a mystery. For some reason, the “height” field on the bottom side of the data is reversed from the top side — up is to the right in the top half and to the left in the lower half.

But even with just what I got from dumping SPI data and plotting it out, it’s pretty apparent that I’m getting the post-processed X/Y estimate data, and it has no problem describing the shapes of simple objects, at least like the flat palm of my hand. It’s a much richer dataset than I got from the default finger sensor output, so I’ll call the hack a success so far. Maybe I’ll put a pair of these in a small robot, or maybe I’ll just make a (no-)touch-pad for my laptop.

Regardless, I hope you had as much fun reading along as I did hacking on these things. If you’re not a command-line ninja, I hope that I showed you some of the power that you can have by simply combining a bunch of tools together, and if you are, what are some of your favorite undersung data analysis tools like these?

The M1 NerfBot: When Prototypes Evolve

Tuesday, 03/06/2018 - 23:30

What do you get when you cross a self-taught maker with an enthusiasm for all things Nerf? A mobile Nerf gun platform capable of firing 15 darts per second. Obviously.

The M1 NerfBot built by [GrimSkippy] — posting in the ‘Let’s Make Robots’ community — is meant to be a constantly updating prototype as he progresses in his education. That being the case, the progress is evident: featuring two cameras — a webcam on the turret’s barrel and another facing forward on the chassis — a trio of ultrasonic sensors, control via an Xbox 360 controller, and streaming video to a webpage hosted on the M1 itself, this is no mere beginner project.

Perhaps most compelling is how the M1 tracks its targets. The cameras send their feeds to the aforementioned webpage and — with a little reorganization — [GrimSkippy] accesses the streams on an FPV headset-mounted smartphone. As he looks about, gyroscopic data from the phone is sent back to the M1, translating head movement into both turret and chassis cam movement. Check it out!

Two relays control the Nerf gun’s firing mechanism, firing in semi-auto, three, and five-round bursts. Those ultrasonic sensors cause the controller to rumble when within six inches of an object, and cause the M1 to stop outright when within two inches.
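
The write-up doesn’t say exactly how the two relays are split between spinning up the flywheels and pushing darts, so treat this as a sketch of the burst logic only, with made-up pin numbers and timings:

import time
import RPi.GPIO as GPIO

FLYWHEEL = 17              # hypothetical BCM pin for the flywheel relay
PUSHER = 27                # hypothetical BCM pin for the dart-pusher relay
DART_INTERVAL = 1.0 / 15   # the M1 manages roughly 15 darts per second

GPIO.setmode(GPIO.BCM)
GPIO.setup([FLYWHEEL, PUSHER], GPIO.OUT, initial=GPIO.LOW)

def burst(darts):
    """Spin up, hold the pusher long enough for N darts, shut down."""
    GPIO.output(FLYWHEEL, GPIO.HIGH)
    time.sleep(0.5)                      # assumed flywheel spin-up time
    GPIO.output(PUSHER, GPIO.HIGH)
    time.sleep(darts * DART_INTERVAL)
    GPIO.output(PUSHER, GPIO.LOW)
    GPIO.output(FLYWHEEL, GPIO.LOW)

burst(3)   # three-round burst; burst(1) and burst(5) cover the other modes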

Altogether, the robot army grows by one more capable ‘bot — not that they need any more.

Retrotechtacular: A Very British MagLev

Tuesday, 03/06/2018 - 22:01

When we look back at the 1970s, it is often cast as somehow a time before technology, a time when analogue was still king, motor vehicles had carburettors, and telephones still had rotary dials.

In fact the decade had a keen sense of being on the threshold of an exciting future, one of supersonic air travel, and holidays in space. Some of the ideas that were mainstream in those heady days didn’t make it as far as the 1980s, but wouldn’t look out of place in 2018.

The unlikely setting for today’s Retrotechtacular piece is the Bedford Levels, part of the huge area of reclaimed farmland in the east of England known collectively as the Fens. The Old Bedford River and the New Bedford River are two straight parallel artificial waterways that bisect the lower half of the Fens for over 20 miles, and carry the flood waters of the River Ouse towards the sea. They are several hundred years old, but next to the Old Bedford River at their southern end are a few concrete remains of a much newer structure from 1970. They are all that is left of a bold experiment to create Britain’s first full-sized magnetic levitating train, an experiment which succeeded in its aim and demonstrated its train at 170 miles per hour, but was eventually cancelled as part of Government budget cuts.

A track consisting of several miles of concrete beams was constructed during 1970 alongside the Old Bedford River, and on it was placed a single prototype train. There was a hangar with a crane and gantry for removing the vehicle from the track, and a selection of support and maintenance vehicles. There was an electrical pick-up alongside the track from which the train could draw its power, and the track had a low level for the hangar before rising to a higher level for most of its length.

After cancellation the track was fairly swiftly demolished, but the train itself survived. It was first moved to Cranfield University as a technology exhibit, before in more recent years being moved to the Railworld exhibit at Peterborough where it can be viewed by the general public. The dream of a British MagLev wasn’t over, but the 1980s Birmingham Airport shuttle was hardly in the same class even if it does hold the honour of being the world’s first commercial MagLev.

We have two videos for you below the break: the first is a Cambridge Archaeology documentary on the system, while the second is a contemporary account of its design and construction from Imperial College. We don’t take high-speed MagLevs on our travels in 2018, but they provide a fascinating glimpse of one possible future we might have had.

It does make one wonder: will the test tracks for Hyperloop transportation break the mold and find mainstream use, or will we find ourselves 50 years from now running a Retrotechtacular on abandoned vacuum tubes?

AM Ultrasonic Transmitter And Receiver

Tuesday, 03/06/2018 - 19:00

Most often ultrasonic transducers are used for distance measurements, and in the DIY world, usually as a way for robots to detect obstacles. But for a weekend project, [Vinod.S] took the ultrasonic transmitter and receiver from a distance-meter module and used amplitude modulation to send music ultrasonically from his laptop to a speaker, essentially transmitting and receiving silent, modulated sound waves.

The transmitter and receiver

For the transmitter, he turned an Arduino Pro Micro into a USB sound card which he could plug into his laptop. That outputs both the audio signal and a 40 kHz carrier signal, implemented using the Arduino’s Timer1. Those go to a circuit board he designed which modulates the carrier with the audio signal using a single transistor and then sends the result out the ultrasonic transmitter.

He took care to transmit a clear signal by watching the modulated wave on an oscilloscope, looking for over-modulation and clipping while adjusting the values of the resistors located between the transistor, a 5 V supply from the Arduino, and the transmitter.
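
What he was guarding against is easy to state: with AM, the transmitted signal is (1 + m·a(t))·sin(2πf_c·t), and the envelope folds over and distorts whenever the modulation index m drives the (1 + m·a(t)) term below zero. A quick numpy illustration, with an assumed 1 kHz test tone on the 40 kHz carrier:

import numpy as np

fs, fc, fa = 400_000, 40_000, 1_000   # sample, carrier, audio rates in Hz
t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * fa * t)

for m in (0.5, 1.5):                  # healthy vs. over-modulated
    envelope = 1 + m * audio
    am = envelope * np.sin(2 * np.pi * fc * t)   # what the scope shows
    print(f'm={m}: envelope minimum {envelope.min():+.2f}',
          '(folds over: distortion)' if envelope.min() < 0 else '(clean)')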

He designed the receiver side with equal care. Conceptually the circuit there is simple, consisting of the ultrasonic receiver, followed by a transistor amplifier for the modulated wave, then a diode for demodulation, another transistor amplifier, and lastly a class-D amplifier before going to a speaker.

Due to the low 40 kHz carrier frequency, the sound lacks the higher audio frequencies. But as a result of the effort he put into tuning the circuits, the sound is loud and clear. Check out the video below for an overview and to listen to the sound for yourself. Warning: Before there’s a storm of comments, yes the video’s shaky, but we think the quality of the hack more than makes up for it.

What else can you do with the ultrasonic transducers? You could wire a bunch of them to a Raspberry Pi to make a piano interface. Or you could use a larger transducer to make an affordable ultrasonic soldering iron.

The Sensor Array That Grew Into a Robot Cat

Tuesday, 03/06/2018 - 16:00

Human brains evolved to pay extra attention to anything that resembles a face. (Scientific term: “facial pareidolia”) [Rongzhong Li] built a robot sensor array with multiple emitters and receivers augmenting a Raspberry Pi camera in the center. When he looked at his sensor array, he saw the face of a cat looking back at him. This started his years-long Petoi OpenCat project to build a feline-inspired body to go with the face.

While the name of the project signals [Rongzhong]’s eventual intention, he has yet to release project details to the open-source community. But by reading his project page and scrutinizing his YouTube videos (a recent one is embedded below) we can decipher some details. Motion comes via hobby remote-control servos orchestrated by an Arduino. Higher-level functions such as awareness of environment and Alexa integration are handled by a Raspberry Pi 3.

The secret (for now) sauce is the mechanical parts that tie them all together, from the impact-absorbing spring integrated into the upper leg to how its wrists/ankles articulate. [Rongzhong] believes the current iteration is far too difficult to build and he wants to simplify construction before release. And while we don’t have much information on the software, the sensor array that started it all implies some level of sensor fusion capabilities.

We’ve seen lots of robotic pets, and for some reason there have been far more robotic dogs than cats. Inspiration can come from Boston Dynamics, from Dr. Who, or from… Halloween? We think the lack of cat representation is a missed opportunity for robotic pets. After all, if a robot cat’s voice recognition module fails and a command is ignored… that’s not a bug, it’s a feature of being a cat.

[via TheNextWeb]

Love Open Source but Hate People? Get OpenKobold

Tuesday, 03/06/2018 - 13:00

[Tadas Ustinavičius] writes in to tell us of his latest project, which combines his two great loves of open source and annoying people: OpenKobold. Named after the German mythical spirit that haunts people’s homes, this tiny device is fully open source (hardware and software) and ready to torment your friends and family for up to a year on a CR1220 battery.

The design of the OpenKobold is quite simple, and the open source nature of the project makes this an excellent case study for turning an idea into a fully functional physical object.

Beyond the battery and the buzzer module, the OpenKobold utilizes a PIC12F675, a transistor, and a few passive components. This spartan design allows for a PCB that measures only 25 x 20 mm, making it very easy to hide but fiendishly difficult to track down later on.

But the real magic is in the software. The firmware that [Tadas] has written for the PIC not only randomizes how often the buzzer goes off, but how long it will sound for. This makes predicting when the OpenKobold will sound off with any accuracy very difficult, confounding the poor soul who’s searching their home or office for this maddening little device.
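
The PIC source is the place to look for the real implementation, but the scheduling idea reduces to a few lines. A Python paraphrase, where the intervals and durations are our guesses rather than values from the firmware:

import random
import time

def beep(duration):
    print(f'bzzz for {duration:.2f} s')   # stand-in for pulsing the buzzer pin

while True:
    time.sleep(random.randint(30 * 60, 8 * 60 * 60))   # doze 30 min to 8 h
    beep(random.uniform(0.05, 2.0))                    # chirp 50 ms to 2 s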

Hackers have a long and storied history of creating elaborate pranks, putting the OpenKobold in very good company. From randomly replaying signals from a remote control to building robotic cardboard burglars, we’ve seen our fair share of elaborate pranks from the community.

Building A Bioactive Vivarium From An IKEA Shelf

Tuesday, 03/06/2018 - 10:00

Pets are often worth a labour of love. [leftthegan] — in want of a corn snake — found that Sweden’s laws governing terrarium sizes made all the commercial options too small for a fully-grown snake. So they took matters into their own hands, building a bioactive vivarium for their pet!

[leftthegan] found an IKEA Kallax 4×4 shelving unit for a fair price, and after a few design iterations — some due to the aforementioned regulations — it was modified by adding a shelf extension onto the front and cutting interior channels for cabling. For the vivarium’s window, they settled on plexiglass but strongly recommend glass for anyone else building their own as the former scratches and bends easily — not great if their snake turns out to be an escape artist! In the interim, a 3D printed handle works to keep the window closed and locked.

Throughout this build, [leftthegan] has kept the potential of future disassembly in mind, so all the interior surfaces have been individually coated in a layer of vinyl to keep moisture away from the MDF, and the heat lamp and LED lighting have connectors for easy separation.

After coating the bottom of the vivarium with pond liner and a generous amount of silicone, they added leca pebbles as a drainage layer with insect netting over top to keep the custom mix of soil substrate separated. They also added oak leaves — which are reptile safe — and some assorted plants alongside the branches and rocks from their snake’s previous habitat to make it feel like home. The waste cleanup crew for this vivarium is two cultures of springtails and a collection of tropical isopods to minimize the maintenance of the enclosure. The vivarium’s various electronics rest inside one of the shelf’s cubbies, while the rest are filled with storage boxes.

[leftthegan]’s snake seems happy for now, so the next logical step is to automate all the things.

[Via /r/DIY]

Portable Guitar Amp – Is That A Linux In Your Pocket?

Tuesday, 03/06/2018 - 07:00

When it comes to music production and audio engineering, Linux isn’t the most common choice. This isn’t for lack of decent tools, nor due to the usual open source usability issues: Ardour as a highly capable, feature-rich digital audio workstation, the JACK Audio Connection Kit for powerful audio routing, and distributions like Ubuntu Studio packing all the essentials nicely together offer a great starting point for a home recording setup. To add variation to your guitar or bass arrangement on top of that, guitarix is a virtual amp that has a wide selection of standard guitar effects. So when [Arnout] felt that his actual guitar amp’s features were too limiting, he decided to build himself a portable, Linux-based amp.

[Arnout] built the amp around an Orange Pi Zero with an expansion board providing USB ports and an audio-out connector, and powers it with a regular USB power bank to ensure easy portability. A cheap USB audio interface compensates for the lack of an audio-in option, and his wireless headphones avoid too much cable chaos while playing. The amp could theoretically be controlled via a MIDI pedalboard, but [Arnout] chose to use guitarix’s JSON API via its built-in Python web interface instead. With the Orange Pi set up as a WiFi hotspot, he can then use his mobile phone to change the effect settings.

One major shortcoming of software-based audio processing is signal latency, and depending on your ear, even a few milliseconds can be disturbingly noticeable. To keep the latency at a minimum, [Arnout] chose to set up his Orange Pi to use the Linux real-time kernel. Others have chosen a more low-level approach in the past, and it is safe to assume that this won’t be the last time someone connects a single-board computer to an instrument. We surely hope so at least.

One Man’s Quest for a Desktop Spherical Display

Tuesday, 03/06/2018 - 04:00

[Nirav Patel] is a man on a mission. Since 2011 he has been obsessed with owning a spherical display, the kind of thing you see in museums and science centers, but on a desktop scale. Unfortunately for him, there hasn’t been much commercial interest in this sort of thing as of yet. Up to this point, he’s been forced to hack up his own versions of his dream display.

That is, until he heard about the Gakken Worldeye from Japan. This device promised to be exactly what he’s been looking for all these years, and he quickly snapped up two of them: one to use, and one to tear apart. We like this guy’s style. But as is often the case with cheap overseas imports, the device didn’t quite live up to his expectations. Undaunted by the out-of-the-box performance of the Worldeye, [Nirav] has started documenting his attempts to improve on the product.

These displays work by projecting an image on the inside of a frosted glass or plastic sphere, and [Nirav] notes that the projection sphere on the Worldeye is actually pretty decent. The problem is the electronics, namely the anemic VGA resolution projector that’s further cropped down to a 480 pixel circle by the optics. Combined with the low-quality downsampling that squashes down the HDMI input, the final image on the Worldeye is underwhelming to say the least.

[Nirav] decided to rip the original projector out of the Worldeye and replace it with a Sony MP-CL1 model capable of a much more respectable 1280×720. He came up with a 3D printed bracket to hold the MP-CL1 in place, and has put the files up on Thingiverse for anyone who might want to play along at home. The results are better, but unfortunately still not great. [Nirav] thinks the sphere is physically too small to support the higher resolution of the MP-CL1, plus the optics aren’t exactly of the highest quality to begin with. But he’s just glad he didn’t have to build this one from scratch.

Going back to our first coverage of his DIY spherical display in 2012, we have to say his earliest attempts are still very impressive. It looks like this is a case of the commercial market struggling to keep up with the work of independent hackers.

Customising a $30 IP Camera For Fun

Tuesday, 03/06/2018 - 02:30

WiFi cameras, like many other devices these days, come equipped with some sort of Linux subsystem. This makes the life of a tinkerer easier, and you know what that means. [Tomas C] saw an opportunity to mod his Xiaomi Dafang IP camera, which comes configured to work only with proprietary apps and a cloud service.

The hack involves voiding the warranty by taking the unit apart and installing custom firmware onto it. Photos posted by [Tomas C] show the mainboard powered by an Ingenic T20, a popular IP camera processor featuring some image and video processing sub-cores. Upon successful flashing of the firmware, the IP camera is capable of a multitude of things such as remote recording and playback, which can be configured using the web UI as documented by [Tomas C].

We did a little more digging on the custom firmware and discovered that its original author, [EliasKotlyar], has done a lot of work on this project. There are loads of images of the teardown of a camera and an excellent set of documentation of how he made the hack. Everything from adding serial headers to getting root access and dumping the firmware, along with toolchain links, is given on the page. This is extremely handy for a newbie looking to get into the game.

And IP cameras are not the only hackable hardware out in the wild. There are other devices running Linux-based firmware, such as the WiFi SD cards that run OpenWRT. Check out the essential guide to compiling OpenWRT from source if you are looking to get started with your next IP camera hack.

Thanks for the tip, [Orlin82]!

Controlling OctoPrint on the Go

Tuesday, 03/06/2018 - 01:00

Not too long ago I took the plunge into the world of OctoPrint by shoehorning a Raspberry Pi Zero into a PrintrBot Play, and I have to say, the results were quite impressive. OctoPrint allows you to run your 3D printer untethered from your computer, but without all the downsides of printing off of an SD card. Generally running off of a Raspberry Pi, OctoPrint serves up a very capable web interface that gives you full control over slicing and printing from essentially any device with a modern browser.

That’s all well and good if you’ve got your laptop with you, or you’re sitting at your desktop. But what if you’re out of the house? Or maybe out in the garage where you don’t have a computer setup? OctoPrint is still happily serving up status information and a control interface, you just don’t have a computer to access it. Luckily, there are options for just that scenario.

In this post we’re going to take a look at a couple of options for controlling and monitoring OctoPrint from your mobile device, which can help truly realize its potential. Personally I have an incredible amount of anxiety when leaving a 3D printer running a long job, and in the past I’ve found myself checking every 10 minutes or so to see if it was done. Now that I can just glance at my phone and see an ETA along with status information about the machine, it’s given me the confidence to run increasingly longer and complex prints.
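
The mobile clients we’re about to look at all talk to OctoPrint’s REST API under the hood, and you can poke at it yourself too. A minimal status check in Python; substitute your own hostname and API key:

import requests

OCTOPRINT = 'http://octopi.local'   # assumed hostname
API_KEY = 'YOUR_API_KEY'            # from the OctoPrint settings page

r = requests.get(f'{OCTOPRINT}/api/job', headers={'X-Api-Key': API_KEY})
job = r.json()

done = job['progress']['completion'] or 0.0     # None while idle
left = job['progress']['printTimeLeft'] or 0    # seconds remaining
print(f"{job['state']}: {done:.1f}% complete, ~{left // 60} min to go")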

TouchUI

Surely the most popular option for controlling OctoPrint from a mobile device is TouchUI by [Paul de Vries], which is available in the official OctoPrint plugin repository. TouchUI optimizes the standard OctoPrint web interface for smaller devices and, as the name suggests, touch screens. It isn’t just limited to smartphones and tablets, either; it’s not unheard of to run it on a small TFT touch screen built directly into the printer, similar to what PrintrBot has done with their new Simple Pro printer.

Using TouchUI is simple enough: just install the plugin through the OctoPrint interface and then open the printer’s IP or hostname in the browser on your mobile device. The server will automatically detect you aren’t on a desktop machine, and load the TouchUI interface.

Being designed for low-resolution touch screens, TouchUI has a fairly chunky layout. It also uses highly contrasting colors to help visibility on devices which may not have very good screens to begin with. The layout is very logical, and it has pretty much everything you need to keep tabs on the current print.

That being said, TouchUI does take the concept to a bit of an extreme. As I mentioned previously, this interface is sometimes used on small (under three inch) displays with very low resolution. That’s a far cry from a current generation smartphone, which is likely to have at least a 720p display on even the most low-end of devices. Accordingly, I found TouchUI to be a bit too simplistic; there’s just a lot of wasted space in the layout. It’s certainly usable, but doesn’t exactly impress.

Printoid

At the time of this writing, Printoid Pro is the highest rated OctoPrint client application on Android. This is a paid application, but a free version is also available. In fact, to really complicate things there are actually two paid versions: Pro and Premium. Without getting into the confusing state of paid smartphone software, it’s enough to say that the free version does pretty much everything you need, while the Pro version adds in some interesting customization and scripting for the OctoPrint server-side that could be useful if you’re looking for additional automation.

I really like the amount of information that Printoid manages to pack into a single screen. The other OctoPrint clients I’ve tried separate the different functions (file management, temperature control, extruder motion, etc) into their own tabs in the interface. Of course that’s all the rage right now in UI design, but it means you have to do a lot of horizontal swiping to see everything. But in the Printoid UI, almost every function of OctoPrint is compressed into the same view.

It’s a bit cramped, and admittedly might seem a bit overwhelming, but I think they’ve done an excellent job all things considered. I really like the way Printoid expresses the time remaining on the print job, not only with a big countdown timer, but with bar graphs showing progress through both the physical operation and the GCode file that’s being parsed. The cost estimate is a nice touch too, on the off chance you want to know exactly how many pennies you’re burning up each time you print out a Benchy.

OctoRemote

The second highest rated OctoPrint client for Android is OctoRemote. Refreshingly, it’s completely free and does not have any ads, though there is a button to donate to the developer if you want. I was happy to see a prominent listing of the Apache-licensed libraries it uses in the menus, but as far as I can tell OctoRemote itself is not open source.

Interface-wise, OctoRemote is basically a refined version of TouchUI. It fully embraces the modern “Material Design” that Google is so obsessed with in the latest versions of Android, which gives it a very “native” look compared to other clients. There’s a considerable amount of horizontal swiping required to perform tasks, which can be annoying. For example, if you wanted to heat up the extruder and push out some filament (i.e. for changing filament colors), you would need to swipe between three separate tabs.

That said, I do like how faithfully this reproduces the functionality of stock OctoPrint. The general layout and options are nearly identical to those in the OctoPrint web interface, so it’s a very easy transition if you’ve become used to expecting certain options to be under particular menus and that sort of thing.

Probably my favorite feature of OctoRemote is the big “Upload” button which you use to push files to the printer. The other clients of course feature a similar function, but for whatever reason they downplay it considerably. The whole point of OctoPrint is to be able to push jobs to your printer over the network, so it seems only logical that it should be front and center in the user interface.

Final Thoughts

Personally, I’ve become quite fond of Printoid. The interface might look a bit like the control panel for a spaceship compared to the more minimalist approach used in other clients, but I appreciate having everything immediately accessible. But OctoRemote does strike a very compelling balance between minimalism and functionality if Printoid throws a bit too much at you.

That said, TouchUI is still an excellent option if you don’t want to install a native application. While its interface is perhaps not perfectly suited to modern smartphones, it absolutely gets the job done. Installing it just takes a few clicks in the OctoPrint settings, and its large user base means there’s plenty of community support.

There are quite a few other OctoPrint clients for Android, and at least a couple for iOS if you’re into that sort of thing. The goal here wasn’t to be an exhaustive test of all the available options, but to simply highlight some of the most popular ones in use right now. Of course, we’d be interested in hearing what the Good Readers of Hackaday are using in the comments below.

3D Printing with Mussels and Beets

Monday, 03/05/2018 - 23:30

What do you get when you combine oven-baked mussels and sugar beets in a kitchen blender? No, it isn’t some new smoothie cleanse or fad diet. It’s an experimental new recyclable 3D printing material developed by [Joost Vette], an Industrial Design Engineering student at Delft University of Technology in the Netherlands. While some of the limitations of the material mean it’s fairly unlikely you’ll be passing over PLA for ground-up shellfish anytime soon, it does have a few compelling features worth looking into.


For one thing, it’s completely biodegradable. PLA is technically biodegradable as it’s usually made primarily of cornstarch, but in reality, it can be rather difficult to break down. Depending on the conditions, PLA could last years exposed to the elements and not degrade to any significant degree. But [Joost] says his creation degrades readily when exposed to moisture; so much so that he theorizes it could have applications as a water-soluble support material when printing with a multiple extruder machine.

What’s more, after the material has been dissolved into the water, it can be reconstituted and put back into the printer. Failed prints could be recycled directly back into fresh printing material without any special hardware. According to [Joost], this process can be repeated indefinitely with no degradation to the material itself: “A lot of materials become weaker when recycled, this one does not.”

So how can you play along at home? The first challenge is finding the proper ratio between water, sugar, and the powder created by grinding up mussel shells necessary to create a smooth paste. It needs to be liquid enough to be extruded by the printer, but firm enough to remain structurally sound until it dries out and takes its final ceramic-like form. As for the 3D printer, it looks like [Joost] is using a paste extruder add-on for the Ultimaker 2, though the printer and extruder combo itself isn’t going to be critical as long as it can push out a material of the same viscosity.

We’ve seen a number of DIY paste extruder mods for 3D printers, which is a good starting point if you’re getting sick of boring old plastic. Before long you might find yourself printing with living tissue.

[Thanks to Mynasru for the tip]

Badgelife: From 1 To 100

Monday, 03/05/2018 - 22:00

Blame it on the falling costs of printed circuit boards, the increased accessibility of hardware design tools, the fact that GCC works on microcontrollers now, whatever the ‘maker movement’ is, or any one of a number of other factors. There’s a hardware demoscene now. Instead of poking bits, writing code, and dividing by zero to create impressive multimedia demonstrations on a computer, there is a small contingent of very creative people who are building their own physical hardware, just for the hell of it. They’re pushing boundaries of what can be done with hardware design, demonstrating manufacturing know-how, and turning a (small) profit while doing it. This is badgelife, the tiny subculture dedicated to creating custom electronic conference badges.

At Hackaday, we’ve been doing a deep dive into the rigors of this demoscene of hardware, and last week we had the pleasure of hosting a meetup with some of the big players of the badgelife community as guests of honor. There were, of course, talks discussing the trials and tribulations of designing, manufacturing, and shipping hundreds of pieces of hardware on a limited budget with not enough time. If you want to know how hard electronic design and manufacturing can be, you want to check out these talks below.

Our first guest for the Hardware Developers Didactic Galactic last week was [Kerry Scharfglass]. On paper, he’s an electrical engineer who sometimes does some coding and has consulted on everything from medical devices to Internet of Things door locks. Perhaps more interestingly, he paid for his trip to Defcon last year by building neat hardware that people wanted.

We took a look at [Kerry]’s badge last summer, calling it the Diamond Age badge, the Drummer’s Badge, a Sympetrum, but it’s best known as the dragonfly badge. In the middle of this dragonfly-shaped PCB is an STM32, with ten APA102 RGB LEDs placed around the perimeter. There are IR LEDs studded around the body, and when a few badges are pointing at each other, the RGB fading syncs up. It’s an homage to Neal Stephenson and a cool bit of electronics, too.

While we got the story of the dragonfly badge existing last summer, we didn’t get the low-down about how this badge came to be. Following on the tail of other popular badges, [Kerry] simply decided he should build a badge. The wish list of features included lots of blinky, wireless communication, rechargeable batteries that last the entire conference, a crypto puzzle, alcohol sensors, a phone app, and a way of interacting with other badges. Common sense slowly drifted into [Kerry]’s manufacturing plan, and he whittled down these requirements to an attainable goal: just some blinkies with AA batteries.

After several prototypes, [Kerry] got the schematic down and slowly transitioned over to the dragonfly-based badge. There are some clever manufacturing tricks on this badge: there are holes for both AA and AAA battery holders, and solderable jumpers for the clock and data lines on the APA102 LEDs. If one LED fails, you can just rip it out and put a solder bridge in. There’s no way you’re going to be reworking badges on the conference floor.

Was this dragonfly badge a success? Hugely so. With 25,000 hackers at Defcon, there were more than enough people who wanted to throw down some cash and get their hands on some sweet, sweet blinky bling. This badge paid for [Kerry]’s trip to Defcon, with a little left over to pay for a few prototypes for this year’s badge.

Building a badge isn’t just something some random guy does. Possibly the most successful badges are those put together by the villages at Defcon. The car hacking badge is always spectacular (and always plugs into your car), the DC Darknet badge last year was beautiful, and the Crypto and Privacy Village badge was an amazing demonstration of supply chain logistics.

[Whitney Merrill] is one of the brains behind the Crypto and Privacy Village. A few years ago, she noticed independent badges floating around Defcon, and along with [supersat] and a few of her other compatriots, decided to create their own badge from scratch. They’ve been doing this for a few years now, and they’ve learned a lot of lessons so far.

The Crypto and Privacy Village badge for last year was an amazing padlock with rotary encoders bedazzled with beetles. The electronics are based on the ESP-32, and there’s a backlit display, cap touch sensors, and a headphone jack to listen to Defcon radio all weekend. Getting to this point wasn’t easy; it involved receiving boxes of Beats headphones loaded up with batteries (thanks, China), tracking ships entering the port of Long Beach, professional assembly, semi-professional rework, and foil-embossed boxes.

These are not talks to miss. Anyone can build one of something, but building dozens or hundreds of a thing is something else entirely. There are logistics, there’s manufacturing, and somehow or another, you’ll need to get those badges out to Vegas. The cheapest way to do this, by the way, is to buy a casket, pay off a funeral home, and get the special air freight rates corpses get.

Shoot-And-Forget Digital Photo Frame

Monday, 03/05/2018 - 19:00

Digital photo frames these days require you to manage the photos stored on them or the cloud-based service tied to the frame’s manufacturer. [Henric Andersson] realized that he and his wife take a lot of photos but find little time to go through them — like photo albums of days past — and add them to any photo frame-like appliance or service. Since Google Photos can do a lot of the sorting for them, he decided to incorporate that into a digital photo frame.

Using his wife’s old Viewsonic 24” 1080p monitor, he cracked it open and incorporated the screen into a 24×16 distressed wood frame — reinforcing it to account for the bulky, built-in power supply with pieces of HDF and a lot of glue. The brains behind this digital photo frame is a Raspberry Pi 3 he received from a friend. To turn the whole thing on and off, he built a small circuit, but it turned out it wasn’t strictly necessary since everything started just fine without it.

While functionally complete, it needed one more addition. A little thing called ‘color temperature calibration’ — aka white balance.

Having found the TCS34725 RGB color sensor by Adafruit — and readily available code for easy integration — [Andersson] puzzled over how to add it to the frame. To disguise it while retaining its effectiveness, he glued it to the rear of the frame after drilling a hole in the top piece and sticking a plastic stick through the hole to let light through to the sensor.
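
Reading the sensor is the easier half of the job; Adafruit’s CircuitPython driver hands back a color temperature directly, and mapping that onto the displayed photos is the project-specific part. A sketch, assuming the sensor hangs off the Pi’s I2C bus:

import board
import adafruit_tcs34725

i2c = board.I2C()
sensor = adafruit_tcs34725.TCS34725(i2c)

kelvin = sensor.color_temperature   # ambient color temperature in kelvin
lux = sensor.lux                    # ambient brightness
print(f'ambient light: {kelvin:.0f} K at {lux:.0f} lux')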

To get the photos to display, [Henric Andersson] says all he did was add a few queries to Google Photos and it will display all your relevant photos that have been synced to the service. For a breakdown of that side of this hack, check out his other post with the details.

While Google Photos deftly displays photos of various orientations, sizes, and aspect ratios, we’ve featured a digital photo frame that handles the task a little differently.

FPGA Makes ASCII Video

Monday, 03/05/2018 - 16:00

Human beings like pictures, which is probably why there’s the old adage “A picture’s worth a thousand words.” We take computer graphic output for granted now, but even in the earliest days of Teletypes and line printers, there was artwork made from characters, ranging from Snoopy to Spock. [Wenting Z] continues the tradition by creating an FPGA that converts VGA video to ASCII art and outputs it via DVI.

The device uses a Xilinx Virtex FPGA and consumes about 500 LUTs (look-up tables), which is not much at all. You can see a video (that includes an overlay of the source video) of the device in action below.

In fact, we think of art like this as a computer phenomenon, but [Flora Stacey] created a butterfly on a typewriter in 1898, and ham radio operators were doing art using paper tape for the last half of the twentieth century. Even before that, in 1865, Alice in Wonderland had a certain passage that was typeset to suggest a mouse’s tail. Perhaps the pinnacle is the famous ASCII version of Star Wars.

This is decidedly less mechanical than some of the other ASCII art projects we’ve seen. If you have a taste for more text art, have a look at some other examples, including a very old advertisement that uses character art.

Graphing Calculator Dual Boots With Pi Zero

Monday, 03/05/2018 - 13:00

The nearly limitless array of consumer gadgets hackers have shoved the Raspberry Pi into should really come as no surprise. The Pi is cheap, well documented, and in the case of the Pi Zero, incredibly compact. It’s like the thing is begging to get grafted into toys, game systems, or anything else that could use a penguin-flavored infusion.

But this particular project takes it to the next level. Rather than just cramming the Pi and a cheap LCD into his Numworks graphing calculator, [Zardam] integrated it into the device so well that you’d swear it was a feature from the factory. By exploiting the fact that the calculator has some convenient solder pads connected to its SPI bus, he was able to create an application which switches the display between the Pi and the calculator at will. With just a press of a button, he’s able to switch between using the stock calculator software and having full access to the internal Pi Zero.

In a very detailed write-up on his site, [Zardam] explains the process of getting the Pi Zero to output video over SPI. The first part of the battle was re-configuring the GPIO pins and DMA controller. After that, there was the small issue of writing a Linux SPI framebuffer driver. Luckily he was able to find some work done previously by [Sprite_TM] which helped him get on the right track. His final driver is able to push 320×240 video at 50 FPS via GPIO, more than enough to kick back with some DOOM.
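
Some back-of-the-envelope math shows what that driver has to sustain: assuming 16 bits per pixel, 320 × 240 pixels × 16 bits × 50 frames per second works out to roughly 61 Mbit/s, a serious amount of data to push out of a GPIO-driven SPI stream and a good argument for letting the DMA controller do the heavy lifting.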

With video sorted out, he still needed a way to interface the calculator’s keyboard with the Pi. For this, he added a function in his calculator application that echoed the keys pressed to the calculator’s UART port. This is connected to the Pi, where a daemon is listening for key presses. The daemon then generates the appropriate keycodes for the kernel via uinput. [Zardam] acknowledges this part of the system could do with some refinement, but judging by the video after the break, it works well enough for a first version.
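
The uinput side of such a daemon is only a few lines with python-evdev. The serial port, baud rate, and key map below are illustrative guesses rather than [Zardam]’s actual values:

import serial
from evdev import UInput, ecodes as e

KEYMAP = {b'1': e.KEY_1, b'+': e.KEY_KPPLUS}   # calculator byte -> keycode

uart = serial.Serial('/dev/ttyAMA0', 115200)
ui = UInput()   # needs permission to create /dev/uinput devices

while True:
    ch = uart.read(1)
    if ch in KEYMAP:
        ui.write(e.EV_KEY, KEYMAP[ch], 1)   # key down
        ui.write(e.EV_KEY, KEYMAP[ch], 0)   # key up
        ui.syn()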

We’ve seen the Pi Zero get transplanted into everything from a 56K modem to the venerated Game Boy, and figured nothing would surprise us at this point. But we’ve got to say, this is one of the cleanest and most practical builds we’ve seen yet.

[Thanks to EdS for the tip]

Behold the Giant Eye’s Orrery-Like Iris and Pupil Mechanism

Monday, 03/05/2018 - 10:00

This is an older project, but the electromechanical solution used to create this giant, staring eyeball is worth a peek. [Richard] and [Anton] needed a big, unblinking eyeball that could look in any direction and their solution even provides an adjustable pupil and iris size. Making the pupil dilate or contract on demand is a really nice feature, as well.

The huge fabric sphere is lit from the inside with a light bulb at the center, and the iris and pupil mechanism orbit the bulb like parts of an orrery. By keeping the bulb in the center and orbiting the blue gel (for the iris) and the opaque disk (for the pupil) around the bulb, the eye can appear to gaze in different directions. By adjusting the distance of the disks from the bulb, the size of the iris and pupil can be changed.
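
There is tidy geometry behind that trick: treating the bulb as a point source, similar triangles say a disk of diameter d held at distance r from the bulb casts a shadow of roughly d·R/r on a sphere of radius R, so pulling the pupil disk halfway in toward the bulb doubles its apparent size.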

A camera system picks out objects (like people) and directs the eye to gaze at them. The system is clever, but the implementation is not perfect. As you can see in the short video embedded below, detection of a person walking by lags badly. Also, there are oscillations present in the motion of the iris and pupil. Still, as a mechanism it’s a beauty.

In the video you can see the eye lag behind detecting the person, but you can also see the iris and pupil appear to bounce as they move. This can happen if the motors are too weak or the load is awkward; the motors end up constantly overshooting then overcorrecting as they struggle to move a load that is too heavy or poorly balanced for them. Slow and steady movement is probably fine, but faster and bigger movements will oscillate more.

We recently saw another eyeball project from [Richard] with his flipping eyeball brooch, which is like a really stylish 1×1 flip-dot display. And if you happen to detest eyes for some reason, you’re probably interested in this robot whose entire purpose is to find eyeballs, then shine a laser into them.