Hackaday

Fresh hacks every day
Updated 4 hours 54 min ago

Sort Faster with FPGAs

Wednesday, 01/20/2016 - 22:01

Sorting. It’s a classic problem that’s been studied for decades, and it’s a great first step towards “thinking algorithmically.” Over the years, a handful of sorting algorithms have emerged, each characterized by its asymptotic order, a measure of how much longer an algorithm takes as the problem size gets bigger. While all sorting algorithms take longer to complete the more elements that must be sorted, some are slower than others.

For a sorter like bubble sort, the time grows quadratically for a linear increase in the number of inputs; it’s of order O(N²). With a faster sorter like merge sort, which is O(N*log(N)), the time required grows far less quickly as the problem size gets bigger. Since sorting is a bit old-hat among many folks here, and since O(N*log(N)) seems to be the generally accepted baseline for top speed with a single core, I thought I’d pop the question: can we go faster?
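If you’d rather see those growth rates than take them on faith, counting comparisons directly makes the point. Here’s a quick illustrative sketch in Python (my own, not part of the original post) that tallies comparisons for bubble sort and merge sort as the input size doubles:

```python
import random

def bubble_comparisons(a):
    """Sort a copy with plain bubble sort; return the number of comparisons made."""
    a, count = list(a), 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return count

def merge_sort(a):
    """Top-down merge sort; return (sorted list, number of comparisons)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, nl = merge_sort(a[:mid])
    right, nr = merge_sort(a[mid:])
    merged, count, i, j = [], nl + nr, 0, 0
    while i < len(left) and j < len(right):
        count += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]       # leftovers need no comparisons
    return merged, count

random.seed(0)
for n in (250, 500, 1000):
    data = [random.randrange(10000) for _ in range(n)]
    print(n, bubble_comparisons(data), merge_sort(data)[1])
```

Doubling N roughly quadruples the bubble sort count (for this plain version it’s exactly N(N−1)/2), while the merge sort count little more than doubles.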

In short — yes, we can! In fact, I’ll claim that we can sort in linear time, i.e., a running time of O(N). There’s a catch, though: to achieve linear time, we’ll need to build some custom hardware to help us out. In this post, I’ll unfold the problem of sorting in parallel, and then I’ll take us through a linear-time solution that we can synthesize at home on an FPGA.

Need to cut to the chase? Check out the full solution implemented in SystemVerilog on GitHub. I’ve wrapped it inside an SPI communication layer so that we can play with it using an everyday microcontroller.

To understand how it works, join us as we embark on an adventure in designing algorithms for hardware. If you’re used to thinking of programming in a stepwise fashion for a CPU, it’s time to get out your thinking cap!

A Parallel Approach

Our goal is to deliver a module of digital logic that receives its data serially, one element per clock cycle, and sorts that data. To achieve a linear-time benchmark, I’ve opted to sort the data while it’s being clocked in. In that sense, once we’ve finished clocking in our data, our array should be sorted immediately afterwards, ready to be clocked out again.

If we can construct such a block, then our hardware solution is linear, or O(N). The problem might seem complicated, but the solution is actually rather simple — provided that we’re doing our sorting in parallel. To unpack how this algorithm works, let’s look at a simple case: sorting a three-element array.

Example Time: the Three-Sorter

Let’s start with a quick example of sorting a three-element array into a container of size three.

On the left we’ve got our unsorted list ready to be clocked in serially; on the right, our “Three-Sorter” sorting unit with a cell array of size three. (An e indicates that the cell is empty.) Inside the Three-Sorter, I’ll declare that the elements in this cell array are stored in increasing order from top to bottom. Our goal is to put each new element in the right spot based on what’s currently inside the cell array. At this point, we don’t actually know any details about the signals that will make this sorting happen, but we’ll walk through the flow of data and build up that logic afterwards.

Sorting Step 1:

In Step 1, all cells are empty. We start by inserting a new element, but we need a place to put it. Since all cells are empty, this new element is the smallest we’ve seen so far, so let’s insert it at the top. Even though there’s just one element, I’ll make a bold claim that our cell array is sorted.

Sorting Step 2:

In Step 2, we try to insert the second element. Our first question might be: where does this element fit in the current container? Looking at all the cells, we can see that this second element 4 is greater than the elements in all other occupied cells. 4 fits into any of the empty slots below 2, but we’ll put it in the first empty slot after the last occupied slot. Hence, our new array is still sorted.

Sorting Step 3:

In Step 3, we’ll insert the third element, 3. Looking at our container of cells, we know that 3 fits nicely between 2 and 4, but there’s simply no empty space. To make room, we’ll kick 4 downwards into the next cell and put 3 in 4’s old spot. (Sorry, 4, but it’s for science!) Now that we’ve inserted 3 — guess what — our internal cells are still sorted!

Taking a step back, every time we insert a new element, we look at each cell and ask ourselves: “Does my new data fit here?” Because we’ve agreed beforehand that the contents of our cell array will always be sorted, it turns out that every cell can answer this question independently without knowing the contents of the rest of the cells. All it needs to do is ask for a few bits of information from the cell above it.

Alas, if only these cells were “intelligent” enough to act on their own and ask questions about their data and their neighbor’s data. If they could, then I claim that all cells could independently agree on who gets to hold the new element as each element gets clocked in.

Signals for Parallel Sorting

Now that we’ve seen cell-sorting in action, let’s revisit this example while we take a deeper look at how these cells communicate. Without knowing the contents of the rest of the cells, all cells need to independently agree where the new element goes so that our new element gets stored once and only once. Let’s see this “cell-talk” in action.

Detailed Sorting Step 1:

In Step 1, we start by inserting 2. Here, all cells are empty, so any spot would be fair game. Since 2 is the smallest element we’ve seen so far, it needs to go in the top cell. The top cell also needs a way to tell all other cells that it just got dibs on the incoming 2. We don’t want to have a “cell-claimed” signal that propagates all the way down the chain; that’s just too slow. Rather, we’ll make a deal across all cells. If a cell is empty, it will only claim the incoming element if its above neighboring cell is occupied. Now, when new data arrives, each empty cell just needs to look above at the cell on top of it to see if that cell is occupied. If it is, that empty cell claims the new data. If it isn’t, then we’ll assume that some empty cell above it already has dibs. To bootstrap the first cell into always claiming the first element of an incoming array, we’ll hard-wire its “above-cell-is-occupied” signal to True. (That’s the fake grey cell in the diagram above.)

This “deal” we just made needs to be so strict that we’ll call it “Rule Number 1.”

  • Rule No. 1: if a cell is empty, it will only claim the incoming element if the above cell is not empty.
Detailed Sorting Step 2:

In Step 2, a second element, 4, arrives to be sorted. Here, we know that 4 should go after 2, but how do our cells know? If a cell is occupied, it needs to ask the question: “is this incoming new number less than my current number?” If it is, then that cell will kick out its current number and replace it with the new number. That current number can get kicked down to the next cell below it. Hmm, if every occupied cell asks that same question, then there could be multiple cells that would be more than happy to replace their current number with the new number. If that were true, then we risk storing multiple copies of an input element. Well, we can’t have that, so we’ll make another rule.

  • Rule No. 2: If a cell is occupied, it will claim the incoming element if both the incoming element is less than the stored element AND the occupied cell above this cell is not kicking out its element.

Phew! Now only one copy of the incoming element will be stored in the entire cell array.

To make sure that we aren’t losing data when cells kick out their data, we’ll make another rule.

  • Rule No. 3: If the cell above the current cell kicks out its stored element, then the current cell MUST claim the above cell’s element, regardless of whether the current cell is empty or occupied.

If a cell is empty — phew — it’ll just follow “Rule Number 1” and “Rule Number 3.”

Detailed Sorting Step 3:

Now, let’s try to insert a 3 as in the last example. 3 should fit nicely in 4’s spot, but how do our cells know? The top cell is holding a 2, and since two is less than three, the 2 stays put. The second cell, however, is holding a 4, and since three is less than four, 4 gets the boot and needs to go down a cell. The first empty cell could take the new data since it’s empty, but since 4 just got kicked out from the cell above, this empty cell gets the 4 instead. All other empty cells below follow “Rule Number 1”. Each of these cells is empty and each cell above these cells was also empty at the time, so they stay empty.

We’ll make one last rule for cells that claim new data:

  • Rule No. 4: If a cell is occupied and accepts a new element (either from the above cell or from the incoming data), it must kick out its current element.

With all our other rules, we can guarantee that any element that gets kicked out will be caught by the cell below.

If all of our cells follow these rules, then we can guarantee that, together, they can sort arrays without needing to know the contents of all other cells, because that information is implicit in the structure of the cell array. And with that said, each sorting step is fast, bubbling down to N identical and independent questions asked in parallel at every clock cycle.
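Before committing rules to hardware, it’s worth convincing yourself they actually keep the array sorted. Here’s a behavioral model in Python (my own sketch of the four rules, not the article’s SystemVerilog): on each clock, every cell makes its decision from the previous cycle’s state, mimicking the parallel hardware update.

```python
def clock_in(cells, new):
    """One clock cycle of the sorter: every cell decides in parallel.
    cells: list of (occupied, value) pairs, sorted top to bottom."""
    n = len(cells)
    # "Does the new data fit here?" -- answered independently by each cell
    fits = [(not occ) or (new < val) for occ, val in cells]
    # Rule 4: an occupied cell that accepts anything kicks out its value
    pushed = [fits[i] and cells[i][0] for i in range(n)]
    nxt = []
    for i in range(n):
        occ, val = cells[i]
        above_occ = cells[i - 1][0] if i > 0 else True   # cell 0's "above" is hard-wired occupied
        above_pushed = pushed[i - 1] if i > 0 else False
        if above_pushed:                 # Rule 3: catch the element kicked from above
            nxt.append((True, cells[i - 1][1]))
        elif occ and fits[i]:            # Rule 2: claim new data (above isn't pushing)
            nxt.append((True, new))
        elif (not occ) and above_occ:    # Rule 1: first empty cell claims new data
            nxt.append((True, new))
        else:
            nxt.append((occ, val))
    return nxt

cells = [(False, None)] * 8
for x in [7, 1, 5, 5, 2]:
    cells = clock_in(cells, x)
print([v for occ, v in cells if occ])   # -> [1, 2, 5, 5, 7]
```

Note that the model handles duplicates for free: the strict less-than in `fits` means a repeated value falls through to the first cell holding something larger.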

Behold–a Sorting Cell

Now that we’ve ironed out the rules, let’s look at the anatomy of a single sorting cell, each of which makes up a unit in our cell array.

To follow the rules we just generated, each cell needs to know information about its upstream cell and pass along information about its current state and data. In a nutshell, these signals just become inputs and outputs to a collection of internal digital logic. To “follow the rules” of our architecture, we’ll need a small snippet of sequential and combinational logic.

First, each cell needs to keep track of its own state, empty or occupied. In SystemVerilog, we can implement this feature as a two-state state machine. To determine whether we claim the new data, claim the previous cell’s data, or hold on to our current data, we’ll need to follow the rules, which I’ve wrapped up as one block of combinational logic.

First, we’ll need some intermediate signals:

assign new_data_fits = (new_data < cell_data) || (cell_state == EMPTY);
assign cell_data_is_pushed = new_data_fits & (cell_state == OCCUPIED);

new_data_fits is simply our criteria for whether or not a cell can accommodate new data.

cell_data_is_pushed tells us whether or not the current cell data in an occupied cell is getting kicked out to the lower level.

Now, we can put together the true nuts and bolts of our sorting cell: the logic for determining what data gets stored.

if (reset) begin
    cell_data <= 'b0; // actual reset value is irrelevant since cell is empty
end else if (enable) begin
    // {shift_up, new_data_fits, prev_cell_data_pushed, cell_state, prev_cell_state}
    casez (priority_vector)
        'b0?1??: cell_data <= prev_cell_data;
        'b0101?: cell_data <= new_data;
        'b0?001: cell_data <= new_data;
        'b1????: cell_data <= next_cell_data;
        default: cell_data <= cell_data;
    endcase
end else begin
    cell_data <= cell_data;
end

The block above is a Verilog case statement with “don’t cares” making some signals take priority over others. The first case is just “Rule Number 3” where a cell always takes the previous cell’s contents if that number is kicked out. The second case is for a cell that is currently occupied, but satisfies the “strictly-less-than” criteria. The third case is for storing the biggest element yet in the first empty cell, or “Rule Number 1” with “Rule Number 3.” The final case isn’t part of the algorithm, but it’s for shifting our data back up the chain when we want to retrieve our array of sorted elements.

Perfect! Now that our sorting cells are encoded to follow the rules, we simply create an array of these with a Verilog generate statement, and our linear-time sorter is complete! Now, give it a go by uploading the source code to real FPGA hardware.

Getting Edgy with Edge Cases

Before celebrating, it’s worth bringing up some edge cases. What happens if our unsorted array has repeats? Can we sort fewer elements than the maximum capacity of our hardware sorter? Without transforming this post into a novel, I’ll reassure you that this solution does indeed handle repeats and also sorts arrays from size 0 up to N, where N is the total number of sorting cells that we’ve synthesized onto our FPGA. It can’t, however, swap back and forth between sorting arrays of signed or unsigned data types without tweaking a few parameters in the source code first. All that said, I encourage you: be skeptical! Don’t believe me until you’ve worked these edge cases out for yourself on paper or in simulation.

Trading Time for Space

We did it! We’ve built our very own sorting peripheral in hardware that chews through unsorted arrays and sorts them in linear time! But, as with all too-good-to-be-true scenarios, we’ve paid a hefty price in FPGA resources. While we may have an ultra-fast sorter built in hardware, I can only instantiate sorting hardware to sort 8-bit arrays up to ~1000 elements long before I’ve exhausted my FPGA’s resources. We’ve traded time for FPGA macrocells. Luckily, the amount of hardware grows linearly as we increase the number of sorting cells to sort larger arrays.

Putting on your “Algorithms-in-Hardware” Hat

I snagged the image above from Altera’s RTL Viewer after synthesizing a Ten-Sorter, and something is strange. As I grow the size of the sorting hardware or change the data type from 8-bit to 16-bit to 32-bit integer by tweaking some top-level parameters, I lose the ability to see all of the connections and signals passing back and forth in my head. It’s just too complicated. There are too many wires.

Nevertheless, the design itself bubbles down to a very simple unit, the sorting cell, that repeats itself many times over with each repeat performing the exact same task. Because we know the behavior of each of these sorting cells, we can predict the behavior of an entire system made up from these blocks — even if we can’t visualize all of the connections in our heads! This type of procedurally generated complexity strikes me as very beautiful, the same kind of beauty that we’d find in fractals or recursive algorithms. If you’ve made it this far, thanks for joining me, and keep us all posted on your adventures in the world of “algorithms in hardware.”


Filed under: Featured, FPGA

Saving Old Voices by Dumping ROMs

Wednesday, 01/20/2016 - 19:01

Some people collect stamps. Others collect porcelain miniatures. [David Viens] collects voice synthesizers and their ROMs. In this video, he just got his hands on the ultra-rare Electronic Voice Alert (EVA) from early 1980s Chrysler automobiles (video embedded below the break).

Back in the 1980s, speech synthesis was in its golden years following the development of TI’s linear-predictive coding speech chips. These are the bits of silicon that gave voice to the Speak and Spell, numerous video game machines, and the TI 99/4A computer’s speech module. And, apparently, some models of Chrysler cars.

We tracked [David]’s website down. He posted a brief entry describing his emulation and ROM-dumping setup. He says he used it for testing out his (software) TMS5200 speech-synthesizer emulation.

The board appears to have a socket for a TMS-series voice synthesizer chip and another slot for the ROM. It looks like an FTDI 2232 USB-serial converter is being used in bit-bang mode with some custom code driving everything, and presumably sniffing data in the middle. We’d love to see a bunch more detail.

The best part of the video, aside from the ROM-dumping goodness, comes at the end when [David] tosses the ROM’s contents into his own chipspeech emulator and starts playing “your engine oil pressure is critical” up and down the keyboard. Fantastic.

Thanks [William] for the tip!


Filed under: classic hacks, musical hacks

Robots and Crickets

Wednesday, 01/20/2016 - 16:01

If you watch science fiction movies, the robots of the future look like us. The truth is, though, many tasks go better when robots don’t look like us. Sometimes they are unique to a particular job or sometimes it is useful to draw inspiration from something other than a human being. One professor at Johns Hopkins along with some students decided to look at spider crickets as an inspiration for a new breed of jumping robots.

Over eight months, the students studied the kinematics of how the crickets could jump up to 60 times their body length and land on their feet. Granted, 60 times their body length is only about 2.5 feet, but if they were human-sized that would equate to jumping across an entire US football field.

The team used high-speed video cameras (up to 400 frames per second) to determine how the wingless crickets manage these huge jumps. They apparently use their limbs and antennae to stabilize themselves during flight. They also streamline their bodies to maximize their jump distance. The researchers have created computerized models that replicate the cricket’s jumping motions. They hope to use this knowledge and these models to design high-jumping robots to travel over rough terrain without the expense of building a fully flying robot.

We’ve seen some insect-inspired robots, of course. We even recall a Russian robotic cockroach. While we’ve seen a jumping robot called a sandflea, we didn’t think the way it jumped mimicked its namesake. You can see the crickets leap in the video below. Then find some shape memory alloy, start a Hackaday.io project and start building.


Filed under: robots hacks

A Digital Canvas That’s Hard to Spot

Wednesday, 01/20/2016 - 13:00

While sorely lacking in pictures of the innards of this digital canvas, we were extremely impressed with the work that went into making such a convincing object. [Clay Bavor] wanted a digital picture frame, but couldn’t find one on the market that did what he wanted. They all had similar problems: the LCDs were of the lowest quality, they were in cheap bezels, they had weird features, they had poor viewing angles, and they either glowed like the sun or were invisible in dark environments.

[Clay] started with the LCD quality: he looked at LCD specs for the absolute best display, and then, presumably, realized he lived in a world where money is no object and bought a 27″ iMac. The iMac has a very high pixel density, wide viewing angles, and Apple goes through the trouble of color balancing every display. Next he got a real frame for the iMac, cut a hole in the wall to accommodate it, and also had a mat installed to crop the display to a more convincing aspect ratio for art. One of the most interesting parts of the build is the addition of a Phidgets light sensor. Using this, he has some software running that constantly adjusts the Mac to run at a brightness that’s nearly imperceptible in the room’s lighting.

Once he had it built he started to play around with the software he wrote for the frame. Since he wanted the frame to look like a real art print he couldn’t have the image change while people were looking, so he used the camera on the Mac and face detection to make sure the image only changed when no one was looking for a few minutes. He also has a mode that trolls the user by changing the image as soon as they look away.

We admit that a hackier version of this would be tearing the panel out of a broken iMac and using a lighter weight computer to run all the display stuff. [Clay] reached the same conclusion and plans to do something similar for his version 2.0.

[via Hacker News]


Filed under: macs hacks

Solderless Breadboard Parasitics

Wednesday, 01/20/2016 - 10:00

Solderless breadboards are extremely handy. You always hear, of course, that you need to be careful with them at high frequencies and that they can add unwanted capacitance and crosstalk to a circuit. That stands to reason since you have relatively long pieces of metal spaced close together — the very definition of a capacitor.

[Ryan Jensen] did more than just listen to that advice. He built a circuit and used a scope to investigate just how much coupling he could expect with a simple digital circuit. Better still, he also made a video of it (see below). The test setup shows a single gate of a hex Schmitt trigger inverter with a sine wave input. The output transitions ring and also couple back into the input.

Of course, the circuit is simple. Some additional decoupling might have helped some. Still, [Ryan’s] circuit isn’t atypical of something you’d see on a breadboard, so his points are still valid. Another thing that can help is mounting the breadboard on a solid ground plane.

Towards the end of the video, [Ryan] guts a breadboard to show what’s inside. We’ve seen people do surgery on the breadboard internals before, but if you haven’t seen one eviscerated before, you might find it interesting.


Filed under: tool hacks

Shmoocon 2016: Efficient Debugging For OS X

Wednesday, 01/20/2016 - 07:00

Developers love their Macs, and if you look at the software that comes with them, it’s easy to see why. OS X is a very capable Unix-ey environment that usually comes on very capable hardware. There is one huge, unbelievable shortcoming in OS X: the debugger sucks. GDB, the standard for every other platform, doesn’t come with OS X, and Apple’s replacement, LLDB, is very bad. After crashing Safari one too many times, [Brandon Edwards] and [Tyler Bohan] decided they needed their own debugger, so they built one, and presented their work at last weekend’s Shmoocon.

Building a proper tool starts with a survey of existing tools, and after determining that GDB was apparently uninstallable and LLDB sucked, their lit review took a turn for the more esoteric. Bit Slicer is what they landed on. It’s a ‘game trainer’ or something that allows people to modify memory. It sort of works like a debugger, but not really. VDB was another option, but again this was rough around the edges and didn’t really work.

The problem with the current OS X debuggers is that the tools debuggers rely on don’t really exist. ptrace is neutered, and the System Integrity Protection in OS X El Capitan has introduced protected locations that cannot be written to, even by root. Good luck modifying anything in /Applications if you have any recent Mac.

With the goal of an easy-to-use debugger that was readily scriptable, [Brandon] and [Tyler] decided to write their own debugger. They ended up writing the only debugger they’ve seen that is built around kqueue instead of ptrace. This allows the debugger to be non-invasive to the debugged process, inject code, and attach to multiple processes at once.

For anyone who has ever stared blankly at the ‘where is GDB’ Stack Overflow answers, it’s a big deal. [Brandon] and [Tyler] have the beginnings of a very nice tool for a very nice machine.


Filed under: cons, software hacks

Hackaday at SCaLE 14x

Wednesday, 01/20/2016 - 05:31

Next weekend we’ll be at the fourteenth annual Southern California Linux Expo, a fantastic four-day event that covers everything from Apache to PHP, installing Ubuntu on old laptops, people who have their control key just to the right of their left hand pinky as god intended, and something about how much Linux sucks.

The event will feature 150 exhibitors, 130 sessions, tutorials, amateur radio tests, and keynotes from Mark Shuttleworth, Cory Doctorow, and Sarah Sharp. It is the largest community-run open source and free software conference in North America.

The Hackaday crew will be there makin’ it rain stickers, but that’s not all: Supplyframe, the Hackaday overlords, is sponsoring Game Night at SCaLE. Saturday night will be filled with vintage video games, Nerf artillery, Settlers of Catan, Fireball Island (if someone can find it), and a hacker show and tell. This year also marks the inaugural SCaLE museum. The theme is Rise of the Machines: A Living Timeline, and it will display historic engineering, computing devices, and clever gadgets.

If you’re in the area on Thursday, we’ll also be having a meet and greet at the soon-to-be-finished Supplyframe Design Lab in Pasadena. We only recently got the paperwork to have people in the space, so if you’d like to have a few drinks, have a few snacks, and look at a Tormach, come on over.


Filed under: cons, Hackaday Columns

Basically, It’s Minecraft

Wednesday, 01/20/2016 - 04:00

[SethBling] really likes Minecraft. How can you tell? A quick look at his YouTube channel should convince you, especially the one where he built a full-blown BASIC interpreter in Minecraft. It is not going to win any speed races, as you might expect, but it does work.

For novelty and wow factor, this is amazing. As a practical matter, it is hard to imagine the real value since there are plenty of ways a new programmer could get access to BASIC. Still, you have to admire the sheer audacity of making the attempt. One Hackaday poster (who shall remain nameless) once won a case of beer by betting someone he or she could write a BASIC compiler in BASIC, so we aren’t sticklers for practicality.

Because the interpreter is so slow, we had wished the prime number algorithm in the demo video (see below) had been just a little more efficient. First, you don’t have to check numbers greater than the square root of the target number, since any factor greater than the square root pairs with another factor that is smaller than the square root. If the BASIC interpreter didn’t have a square root function, you could, at least, stop at half the target number. Of course, you could also keep a list of primes and skip testing any non-prime factor. After all, if (for example) 15 evenly divides your number, so will 5 and 3, right? But that’s algorithm design and takes nothing away from the BASIC.
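The square-root cutoff is a one-line change in most languages. Here’s a quick Python sketch of the same trial-division idea (purely illustrative; it has nothing to do with [SethBling]’s in-Minecraft implementation):

```python
def is_prime(n):
    """Trial division, testing candidate factors only up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # stop once d exceeds sqrt(n)
        if n % d == 0:
            return False   # d divides n evenly, so n is composite
        d += 1
    return True

print([n for n in range(2, 30) if is_prime(n)])
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

For a number like 997, that’s 30 trial divisions instead of nearly a thousand, which matters a great deal when every division costs many Minecraft ticks.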

We’ve seen a 6502 running Forth in Minecraft, so it’s good to see BASIC getting its day in the sun as well. We still see pockets of BASIC interest, like the fairly new interpreter that turns an ESP8266 into an ersatz BASIC Stamp.


Filed under: computer hacks

Developed on Hackaday : HaDge update – it’s a HACK

Wednesday, 01/20/2016 - 02:31

Work on HaDge – the Hackaday con badge – continues in bits and spurts, and we’ve had some good progress in recent weeks. HaDge will be one conference badge to use at all conferences, capable of communicating between badges.

Picking up from where we left off last time, we had agreed to base it around the Atmel SAM D21, a 32-bit ARM Cortex M0+ processor. To get some prototype boards built to help with software development, we decided to finish designing the HACK before tackling HaDge. HACK is a project that [Michele Perla] started that we have sort of assimilated to act as the prototyping platform for HaDge. We wanted a compact microcontroller board and hence opted for the SAM D21E – a 32-pin package with 26 I/Os.

[Michele Perla] had earlier designed HACK based on the larger 48-pin SAM D21G and used Eagle to draw the schematic and layout. Using the Eagle to KiCad script, he quickly converted the project and got on to making the board layout. I took up the rear guard, and worked on making his schematic (pdf) “pretty” and building up a schematic library of symbols. While [Michele] finished off the board layout, I worked on collecting STEP models for the various footprints we would be using, most of which I could get via 3dcontentcentral.com. The few I couldn’t were built from scratch using FreeCAD. The STEP models were converted to VRML using FreeCAD. Using [Maurice]’s KiCad StepUp script, we were able to obtain a complete STEP model of the HACK board.

HACK is now ready to go for board fabrication and assembly. We plan to get about 20 boards made and hand them out to developers for working on the software. The GitHub repository has all the current files for those who’d like to take a look – it includes the KiCad source files, PDFs, gerbers, data sheets and images. The board will be breadboard compatible and also have castellated pads to allow it to be soldered directly as a module. Let us know via group messaging on the HACK project page if you’d like to get involved with either the software or hardware development of HaDge.

In a forthcoming post, we’ll put out ideas on how we plan to take forward HaDge now that HACK is complete. Stay tuned.


Filed under: Hackaday Columns, hardware

J.C. Bose and the Invention of Radio

Wednesday, 01/20/2016 - 01:01

The early days of electricity appear to have been a cutthroat time. While academics were busy uncovering the mysteries of electromagnetism, bands of entrepreneurs were waiting to pounce on the pure science and engineer solutions to problems that didn’t even exist yet, but could no doubt turn into profitable ventures. We’ve all heard of the epic battles between Edison and Tesla and Westinghouse, and even with the benefit of more than a century of hindsight it’s hard to tell who did what to whom. But another conflict was brewing at the turn of the 20th century, this time between an Indian polymath and an Italian nobleman, and it would determine who got credit for laying the foundations for the key technology of the 20th century – radio.

Appointment and Disappointment

Jagadish Chandra Bose

In 1885, a 27-year-old Jagadish Chandra Bose returned to his native India from England, where he had been studying natural science at Cambridge. Originally sent there to study medicine, Bose had withdrawn due to ill health exacerbated by the disagreeable aroma of the dissection rooms. Instead, Bose returned with a collection of degrees in multiple disciplines and a letter of introduction that prompted the Viceroy of India to request an appointment for him at Presidency College in Kolkata (Calcutta). One did not refuse a viceroy’s request, and despite protests by the college administration, Bose was appointed professor of physics.

Sadly, the administration found ways to even the score, chiefly by not providing Bose with any laboratory space, but also by offering him only 100 rupees a month salary, half of what an Indian professor would normally make, and only a third of an Englishman’s salary. Bose protested the latter by refusing salary checks – after three years his protest worked and he got his full salary retroactively – and worked around the former by converting a tiny cubicle next to a restroom into a lab. But in those 24 square feet, equipped with instruments of his own design and paid for at his own expense, Bose would work wonders and begin to engineer the embryonic field of radio.

At around the time Bose joined Presidency College, Heinrich Hertz was confirming the existence of electromagnetic waves, postulated by James Clerk Maxwell in the 1860s. Maxwell died before he could demonstrate that electricity, magnetism, and light are all one and the same phenomenon, but Hertz and his spark gap transmitters and receivers proved it. Inspired by this work and intrigued by the idea that “Hertzian Waves” and visible light were the same thing, Bose set about exploring this new field.

Bose’s microwave apparatus. Transmitter on right, galena detector in the horn on the left of the experiment stage. By Biswarup Ganguly

By 1895, barely a year after starting his research, Bose made the first public demonstration of radio waves in the Kolkata town hall. Details of the apparatus used are vague, but at a distance of 75 feet, he remotely rang an electric bell and ignited a small charge of gunpowder. The invited guests were amazed by the demonstration that Adrisya Alok, or “Invisible Light” as Bose would summarize it in a later essay, could pass through walls, doors, and in a particularly daring feat of showmanship, through the body of the Lieutenant Governor of Bengal.

Bose’s wireless demonstration was remarkable for a couple of reasons. First, it took place two years before Marconi’s first public demonstrations of wireless telegraphy in England. Where Marconi was keenly interested in commercializing radio, Bose’s interest was purely academic; in fact, Bose flatly refused to patent nearly all of the inventions that would spring from his tiny workshop, on the principle that ideas should be shared freely.

The 1895 demonstration also used microwave signals instead of the low and medium frequency waves that Marconi and others were working with. Bose recognized early on that shorter wavelengths would make it easier to explore the properties of radio waves that were similar to light, like reflection, refraction, and polarization. To do so, he invented almost all the basic components of microwave systems – waveguides, polarizers, horn antennas, dielectric lenses, parabolic reflectors, and attenuators. His spark-gap transmitters were capable of 60 GHz operation.

Coherent Thoughts

Marconi Admiralty Pattern Coherer. Source: The Science Museum (UK)

Some of Bose’s most important work in radio concerned the detection of electromagnetic waves. Early wireless pioneers had discovered that electromagnetic waves could be rectified by fine metal particles contained in a tube between metal conductors; the electrical energy would cause the particles to clump together and become conductive. The device was called a coherer because of this clumping action, and coherers served as the rectifiers in all the early practical wireless receivers, despite their operation not being well understood. Experiments with coherers continue to this day.

Early coherers had a problem, though – the filings stayed stuck together after the signal had passed. The device needed to be reset by a tiny electromagnetic tapping mechanism that jiggled the filings back into a non-conductive state before the next signal could be detected. This had obvious effects on bandwidth, so the search for better detectors was on. One improvement invented by Bose in 1899 was the iron-mercury-iron coherer, with a pool of mercury in a small metal cup. A film of insulating oil covered the mercury, and an iron disc penetrated the oil but did not make contact with the liquid mercury. RF energy would break down the insulating oil and allow conduction, with the advantage of not needing a decoherer to reset the system.

Bose’s improved coherer design would miraculously appear in Marconi’s transatlantic wireless receiver two years later. The circumstances are somewhat shady – Marconi’s story about how he came up with the design varied over time, and there were reports that Bose’s circuit designs were stolen from a London hotel room while he was presenting his work. In any case, Bose was not interested in commercializing his invention, which Marconi would go on to patent himself.

The Father of Semiconductors?

Early Bose galena point-contact detectors. Source: National Radio Astronomy Observatory

Bose also did early work on semiconductor detectors. He was exploring the optical properties of radio waves when he discovered that galena, an ore of lead rich in lead sulfide, was able to selectively conduct in the presence of radio waves. He was able to demonstrate that point contacts on galena crystals worked as a better coherer, and in an uncharacteristic move actually patented the invention. Interestingly, the patent includes descriptions of substances that show either decreased or increased resistance to current flow with increasing voltage; Bose chose to describe these as “positive” and “negative” substances, an early example of the “P-N” nomenclature that would become common in semiconductor research. Decades later, Walter Brattain, co-inventor of the transistor, would acknowledge that Bose had beaten everyone to the punch on semiconductors and would credit him with inventing the first semiconductor rectifier.

Inventions and innovations would flow from Bose’s fertile mind for many decades. He eventually turned his attention to plant physiology, studying the stress responses of plants with a sensitive device he invented, the crescograph, which could amplify the movements of the tips of plants by a factor of 10,000. Not surprisingly, he also did important work on the effects of microwaves on plant tissues. Bose also did work comparing metal fatigue and fatigue in physically stressed plant tissues. Bose is also considered the father of Bengali science fiction.

Bose is rarely remembered as a pioneer in radio, despite all he accomplished in engineering the wireless system that would eventually stitch together the world. Given his position on patents, that’s not surprising – his inventions were his gift to the world, and he seemed content with letting others capitalize on his genius.


Filed under: Featured, wireless hacks

Shmoocon 2016: Reverse Engineering Cheap Chinese Radio Firmware

Tue, 01/19/2016 - 23:31

Every once in a great while, a piece of radio gear catches the attention of a prolific hardware guru and is reverse engineered. A few years ago, it was the RTL-SDR, and since then, software defined radios became the next big thing. Last weekend at Shmoocon, [Travis Goodspeed] presented his reverse engineering of the Tytera MD380 digital handheld radio. The hack has since been published in PoC||GTFO 0x10 (56MB PDF, mirrored) with all the gory details that turn a $140 radio into the first hardware scanner for digital mobile radio.

The Tytera MD-380 digital radio

The Tytera MD380 is a fairly basic radio with two main chips: an STM32F405 with a megabyte of Flash and 192k of RAM, and an HR C5000 baseband. The STM32 has both JTAG and a ROM bootloader, but both of these are protected by Read-Out Protection (RDP). Getting around the RDP is the very definition of a jailbreak, and thanks to a few forgetful or lazy Chinese engineers, it is most certainly possible.

The STM32 in the radio implements a USB Device Firmware Upgrade (DFU), probably because of some example code from ST. Dumping the memory from the standard DFU protocol just repeated the same binary string, but with a little bit of coaxing and investigating the terrible Windows-only official client application, [Travis] was able to find non-standard DFU commands, write a custom DFU client, and read and write the ‘codeplug’, an SPI Flash chip that stores radio settings, frequencies, and talk groups.

Further efforts to dump all the firmware on the radio were a success, and with that began the actual reverse engineering of the radio. It runs an ARM port of MicroC/OS-II, a real-time embedded operating system. This OS is very well documented, and with slightly more effort, new functions and patches can be written.

In Digital Mobile Radio, audio is sent through either a public talk group or a private contact. The radio is usually set to only one talk group, so it’s not really possible to listen in on other talk groups without changing settings. A patch for promiscuous mode – a mode that puts all talk groups through the speaker – amounts to changing a single JNE instruction in the firmware to a NOP.

The Tytera MD-380 ships with a terrible Windows app used for programming the radio

With the help of [DD4CR] and [W7PCH], the entire radio has been reverse engineered, with rewritten firmware that works with the official tools, the first attempts at scratch-built firmware built around FreeRTOS, and the beginnings of a very active development community for a $140 radio. [Travis] is looking for people who can add support for P25, D-Star, System Fusion, a proper scanner, or the ability to send and receive DMR frames over USB. All these things are possible, making this one of the most exciting radio hacks in recent memory.

Before [Travis] presented this hack at the Shmoocon fire talks, intuition guided me to look up this radio on Amazon. It was $140 with Prime, and the top vendor had 18 in stock. Immediately after the talk – 20 minutes later – the same vendor had 14 in stock. [Travis] sold four radios to members of the audience, and there weren’t that many people in attendance. Two hours later, the same vendor had four in stock. If you’re looking for the best hardware hack of the con, this is the one.


Filed under: cons, radio hacks, slider

Why No Plane Parachutes? And Other Questions.

Tue, 01/19/2016 - 22:01

This week I was approached with a question. Why don’t passenger aircraft have emergency parachutes? Whole plane emergency parachutes are available for light aircraft, and have been used to great effect in many light aircraft engine failures and accidents.

But the truth is that while parachutes may be effective for light aircraft, they don’t scale. There are a series of great answers on Quora which run the numbers on the size of parachute that would be needed for a full-size passenger jet. I recommend reading the full thread, but suffice it to say a ballpark estimate would require a million square feet (92,903 square meters) of material. This clearly isn’t very feasible, and the added weight and complexity would no doubt bring its own risks.

Accidents/No. of flights by year. Data compiled from 1 and 2.

There’s a deeper issue hiding in the questions though. A question of flight safety, and perhaps our inherent fear of flight. It’s easy to worry about the safety of passenger aircraft, particularly in light of the spate of high profile accidents in the last couple of years. However, the truth is that air travel is not only very safe, it’s getting safer every year.

The figure to the right is compiled from a couple of publicly available sources. It shows the number of airplane accidents per year divided by the number of flights. Since the 1970s accident rates have consistently dropped. There’s an excellent write-up covering this by an ex-Boeing employee which I also urge you to read.

One aspect of air flight that breeds fear is the lack of information that often accompanies accidents. The unknown fate of missing aircraft allows the media to feed on speculation, and with it our natural fear of the unknown. Our inability to locate aircraft often seems confusing: in a world where we feel significant effort is required to avoid having our locations tracked by the NSA every second of every day, why can we not locate something as large as a passenger jet?

The fact is that aircraft are constantly monitored when possible, and that the information is widely available! Aircraft transmit their GPS coordinates over ADS-B. The popular flight traffic monitoring site Flightradar24 uses this as one of its data sources. It’s also pretty easy to acquire and decode ADS-B signals yourself using an RTL-SDR dongle.

However, ADS-B relies on aircraft communicating with ground stations, so it doesn’t work over oceans. A solution to this would be to use satellite uplinks, but that’s expensive, and some say of limited utility. Other suggestions are to create a kind of mesh network between aircraft as they travel over oceans. No doubt such tracking solutions will become more common as user demand for in-flight WiFi continues to grow and Twitter becomes cluttered with users tweeting pictures of their in-flight meals.

Whatever the root cause of our fears, air travel is very, very safe. But another recent “what if” prompted me to consider what air travel might be like if safety were not our paramount concern. The recent and controversial Amazon TV series “The Man in the High Castle” shows a world where the Allied powers lost World War Two and the Americas are ruled by a coalition of the Japanese and Nazis. At one point a supersonic flight (the featured image above) lands in San Francisco, having taken only two hours to arrive from Europe. In a world perhaps more willing to take risks with human life, and more willing to compromise on the wishes of its citizens, would supersonic flight still be commonplace?

Commonplace Supersonic Travel

We, of course, used to have supersonic passenger aircraft. Until the year 2000, Concorde was considered one of the safest aircraft in the world. Its one and only crash in that year, the slump in air traffic following 9/11, and the fact that supersonic commercial aircraft were banned over land all conspired to make Concorde uneconomical. Commercial flights ceased in 2003.

A hobbyist rocket, capable of supersonic speeds.

While supersonic flight may currently be impractical for commercial aviation, hobbyists have been trying to get in on the supersonic action. The fastest RC aircraft have not yet quite reached supersonic speeds, but supersonic rockets have been built. This instructable describes the process of modifying a $70 rocket (with $360 of components) to achieve supersonic speeds. On a test flight [gizmologist] achieved a speed of 801 mph (Mach 1.07).

There was no audible sonic boom from [gizmologist]’s rocket; by the time the rocket reaches supersonic speeds it’s already 450 meters up, and the sound waves generated radiate out sideways.

Supersonic aircraft, however, do produce a sonic boom, and it was this invasive sound that caused commercial supersonic flight to be banned over land, helping to seal Concorde’s fate.

Sonic booms are caused by the same mechanism as the Doppler effect. In the Doppler effect, an object moving toward you appears to produce a higher-frequency sound. Because the source of the sound waves is moving toward you, it “catches up” with the wave front, effectively compressing the waves and generating a higher frequency in the direction of motion.

In a sonic boom the wave fronts are being pushed so close together that they catch up with each other. Multiple wave fronts therefore lie on top of each other. All that sound is compressed together, and reaches your ear in one big bang.
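Both regimes can be summarized with a pair of standard textbook relations (they do not appear in the original article): the Doppler shift below the speed of sound, and the Mach cone angle above it.

```latex
% Doppler shift heard from a source approaching at speed v (sound speed c):
\[
  f_{\mathrm{obs}} = \frac{f_{\mathrm{src}}}{1 - v/c}
\]
% As v -> c the denominator goes to zero and the wavefronts pile up into a
% single shock. Past the sound barrier, the shock trails behind as a Mach
% cone whose half-angle depends only on the Mach number M = v/c:
\[
  \sin\theta = \frac{c}{v} = \frac{1}{M}
\]
```

The second relation is why [gizmologist]’s boom radiated sideways: at Mach 1.07 the cone is very wide, and by 450 meters up the shock never reaches the ground with any punch.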

NASA, however, has been working on plans to “fix” the sonic boom issue in supersonic aircraft. The agency has invested $2.3 million in research projects to predict and reduce the sonic boom effect.

Who Killed the Electric Plane?

Supersonic aircraft were an impressive technological leap, but existing designs are still powered by fossil fuels. In a world where consumer vehicles are beginning to transition to all electrical systems this feels a little old fashioned. Building small electric planes is possible, but has been held back in the US by FAA rulings, though a few are available.

Existing electric aircraft are all pretty conventional, using electric motors (generally of the brushless DC variety) to generate motion. Spend much time on YouTube, though, and you’re likely to come across a very different type of “craft”, with many videos claiming to show anti-gravity UFOs. These stem from Thomas Townsend Brown. In the 1960s he created devices which he believed were using electric fields to modify gravity. Unfortunately this was not the case.

What he had actually built was an Ionocraft.

The basic propulsion mechanism is quite simple: put a pointed electrode near a smooth one, throw a few thousand volts across them, and this simple setup will generate thrust. It accomplishes this by creating an electric field focused on the tip and spreading out to the smooth surface. Where the field is strong, electrons are pulled off atoms in the air, ionizing it. These positively charged atoms fly toward the negative electrode. This in itself does not generate thrust, but as the ions move they hit other uncharged atoms in the air, creating what is known as “ionic wind”.

A related technique has been used successfully by NASA and JAXA in their space probes. Because there’s no air to ionize in space, however, the probes need to carry their own gas to ionize. While ion thrusters produce very little force, they are extremely efficient, which is of paramount importance in space travel.

However, aside from the odd YouTube video, they’ve found limited utility here on Earth. It’s possible that this could change. While ion thrusters generally produce very little force, recent studies have shown that they may be an order of magnitude more efficient than jet engines. There are some pretty significant challenges to solve, like the huge voltages (tens of kilovolts even in a small lifter) required to generate the required lift, or the large physical size of the thrusters. But wouldn’t it be entertaining if the future of both space and terrestrial flight rested in what was once considered the work of an anti-gravity crank?

Whatever happens in the future, let’s hope that planes become faster and more efficient as their unremitting march of ever-increasing safety continues.


Filed under: classic hacks, Featured, transportation hacks

Hacking Education – A Makerspace Experiment

Tue, 01/19/2016 - 19:01

This is an education hack, and it’s pretty awesome. [Abhijit Sinha] received an engineering degree and took up a run-of-the-mill IT job in Bangalore, considered India’s IT hub. Seven months down the line, on Dec 31st, he gave notice to the company and quit his “boring” job. He ended up in Banjarpalya, a village just 30 km out of Bangalore. But it could well have been 30 years back in time. The people there had never come across computers, and there wasn’t much sign of other modern technology. So he set up Project DEFY – Design Education for You.

He bought a few refurbished laptops, took a room, and put kids and computers together. Except, these kids just knew a smattering of English. They went to the village school, run by the government and staffed by teachers whose training was basic, at best. He told the kids there are games in those boxes for them to play, but they’d have to figure it out on their own, without help from him. Pretty soon, all of them were playing games like they were pros. That’s when [Abhijit] stepped in and told them that they’d created a baseline for having fun. Everything else they did from now on had to be more fun than what they had just done. If they were interested, he would show them how.

He had a gaggle of kids waiting to hear him with rapt attention. He showed them how to look online for information. He showed them how they could learn how to build fun projects by looking up websites like Instructables, and then use locally available materials and their own ingenuity to build and modify. Once a project was done, he showed them how to post details about what they had done and learnt so others around the world could learn from them. The kids took to all this like fish to water. They couldn’t wait to get through 5 hours of school each day, and then head over to their makerspace to spend hours tinkering. Check out their Instructable channel – and see if you can give them some guidance and advice.

A year on, on Dec 31st again, [Abhijit] gathered the kids, and several adults who had joined in during the year, telling them he had news. He had figured they were independent enough to run the space on their own now, without any help from him. He would still get them the 500-odd dollars they needed each month to keep it operational. Other than that, they were on their own. He’s been monitoring their progress, and from the looks of it, the hack seems to have worked. More power to [Abhijit] and others like him around the world who are trying to bring the spirit of making to those who probably stand to benefit from it the most. Check out the videos below where they show off their work.

PS : Here’s the latest update from [Abhijit] : “Got back to the Banjarapalya Makerspace after quite a while, and this is what they show me – they built a little plane. Of course it crash lands, and needs a better programming, but I am super impressed that they are ready to fly.
Anyone who wants to help them technically? Financially? With parts and components ?”


Filed under: Hackerspaces

Cyborg Photosynthetic Bacteria!

Tue, 01/19/2016 - 16:01

This is weird science. Researchers at Lawrence Berkeley National Laboratory have taken some normal bacteria and made them photosynthetic by adding cadmium sulfide nanoparticles. Cadmium sulfide is what makes the garden-variety photoresistor work. That’s strange enough. But the bacteria did the heavy lifting — they coated themselves in the inorganic cadmium — which means that they can continue to grow and reproduce without much further intervention.

Bacteria are used as workhorses in a lot of chemical reactions these days, and everybody’s trying to teach them new tricks. But fooling them into taking on inorganic light absorbing materials and becoming photosynthetic is pretty cool. As far as we understand, the researchers found a chemical pathway into which the electrons produced by the CdS would fit, and the bacteria took care of the rest. They still make acetic acid, which is their normal behavior, but now they produce much more when exposed to light.

If you want to dig a little deeper, the paper just came out in Science magazine, but it’s behind a paywall. But with a little searching, one can often come up with the full version for free. (PDF).

Or if you’d rather make electricity instead of acetic acid from your bacteria, be our guest. In place of CdS, however, you’ll need a fish. Biology is weird.

Headline images credit: Peidong Yang


Filed under: chemistry hacks, news

Naviator Drone Uses its Propellers to Fly and Swim

Tue, 01/19/2016 - 13:00

Rutgers University just put out a video on a “drone” that can fly and then drop into a body of water, using its propellers to move around. This isn’t the first time we’ve covered a university making sure Skynet can find us even in the bathtub, but this one is a little more manageable for the home experimenter. The robot uses a Y8 motor combination: the motors in each pair on its four arms spin in opposite directions but provide thrust in the same direction. Usually this provides a bit more stability and a lot more redundancy in a drone. In this case we think it helps the robot leave the water and offers a bit more thrust underwater, where the props become dramatically less efficient.

We’re excited to see where this direction goes. We can already picture the new and interesting ways one can lose a drone and GoPro forever using this, even with the integral in your toolbox. We’d also like to see if the drone-building community can figure out the new dynamics for this drone and release a library for the less mathematically inclined to play with. Video after the break.

Thanks [Keith O] for the tip!


Filed under: drone hacks

Shmoocon 2016: Hackers for Charity

Tue, 01/19/2016 - 10:00

To one side of the “Chill Room” at this year’s Shmoocon were a few tables for Hackers for Charity. This is an initiative to make skills-training available for people in Uganda. The organization is completely supported by the hacker community.

Hackers for Charity was founded by Johnny Long about seven years ago. He had been working as a penetration tester but you perhaps know him better from his many books on hacking. Having seen the lack of opportunity in some parts of the world, Johnny started Hackers for Charity as a way to get used electronics and office equipment into the hands of people who needed it most. This led to the foundation of a school in Uganda that teaches technology skills. This can be life-changing for the students who go on to further schooling, or often find clerical or law enforcement positions. Through the charity’s donations the training center is able to make tuition free for about 75% of the student body.

The education is more than just learning to use a word processor. The group has adopted a wide range of equipment and digital resources to make this an education you’d want for your own children. Think Chromebooks, Raspberry Pi, robotics, and fabrication. One really interesting aspect is the use of RACHEL, which is an effort to distribute free off-line educational content. This is a searchable repository of information that doesn’t require an Internet connection. Johnny told me that it doesn’t stop at the schoolroom door; they have the system on WiFi so that anyone in the village can connect and use the resources whether they’re students or not.

Shmoocon does something interesting with their T-shirt sales. They’re not actually selling shirts at all. They’re soliciting $15 donations. You donate, and you get a shirt and a chit — drop your chit in a box to decide where your $15 should go. This year, Hackers for Charity, the EFF, and World Bicycle Relief were the charities to choose from. If you want to help out this 501c3 organization, consider clicking the donate button you’ll find on the sidebar and footer of their webpage.


Filed under: cons

Mouse Pen from Old Parts

Tue, 01/19/2016 - 07:01

No offense to [Douglas Engelbart] but the computer mouse has always seemed a bit of a hack to us (and not in the good sense of the word). Sure we’ve all gotten used to them, but unlike a computer keyboard, there is no pre-computer analog to a mouse. There are plenty of alternatives, of course, like touchpads and trackballs, but they never seem to catch on to the extent that the plain old mouse has.

One interesting variation is the pen mouse. These do rely on a pre-computer analog: a pen or pencil. You can buy them already made (and they are surprisingly inexpensive), but what fun is that? [MikB] wanted one and decided to build it instead of buying it.

The main parts of the pen mouse include a cheap mouse with a failing scroll wheel, a bingo pen, and the base from an old web camera. There’s also a normal-sized pen to act as the handpiece. The project is mostly mechanical rather than electrical. [MikB] took the mouse apart and cut the PCB to fit inside the base. The rest of the build is a construction project.

The result appears to work well. [MikB] includes instructions for installing the mouse correctly in Linux. The net effect is like a tablet but doesn’t require much space on your desk. We’ve seen plenty of mouse projects in the past, of course. We’ve even seen hacks for a head mouse if that’s your thing.


Filed under: misc hacks, peripherals hacks

Up Your Tiny House Game with Stone Age Hacks

Tue, 01/19/2016 - 04:00

Bare feet, bare hands, and bare chest – if it weren’t for the cargo shorts and the brief sound of a plane overhead, we’d swear the video below was footage that slipped through a time warp. No Arduinos, no CNC or 3D anything, but if you doubt that our Stone Age ancestors were hackers, watch what [PrimitiveTechnology] goes through while building a tile-roofed hut with no modern tools.

The first thing we’ll point out is that [PrimitiveTechnology] is not attempting to be (pre-)historically accurate. He borrows technology from different epochs in human history for his build – tiled roofs didn’t show up until about 5,000 years ago, by which time his stone celt axe would have been obsolete. But the point of the primitive technology hobby is to build something without using any modern technology. If you need a fire, you use a fire bow; if you need an axe, shape a rock. And his 102-day build log details every step of the way. It’s fascinating to watch logs, mud, saplings, rocks and clay come together into a surprisingly cozy structure. Especially awesome if a bit anachronistic is the underfloor central heating system, which could turn the hut into a lovely sauna.

Primitive technology looks like a fascinating hobby with a lot to teach us about how we got to now. But if you’re not into grubbing in the mud, you could always 3D print a clay hut. We’re not sure building an enormous delta-bot is any easier, though.

Thanks to [Rockyd] for the tip.


Filed under: classic hacks, misc hacks

Running Calculus on an Arduino

Tue, 01/19/2016 - 01:01

It was Stardate 2267. A mysterious life form known as Redjac possessed the computer system of the USS Enterprise. Being well versed in both computer operations and mathematics, [Spock] instructed the computer to compute pi to the last digit. “…the value of pi is a transcendental figure without resolution” he would say. The task of computing pi presents to the computer an infinite process. The computer would have to work on the task forever, eventually forcing the Redjac out.

Calculus relies on infinite processes. And the Arduino is a (single-threaded) computer. So the idea of running a calculus function on an Arduino presents a seemingly impossible scenario. In this article, we’re going to explore the idea of using derivative-like techniques with a microcontroller. Let us be reminded that the derivative provides an instantaneous rate of change. Getting an instantaneous rate of change when the function is known is easy. However, when you’re working with a microcontroller and varying analog data without a known function, it’s not so easy. Our goal will be to get an average rate of change of the data. And since a microcontroller is many orders of magnitude faster than the rate of change of the incoming data, we can calculate the average rate of change over very small time intervals. Our work will be based on the fact that the average rate of change and instantaneous rate of change are the same over short time intervals.

Houston, We Have a Problem

In the second article of this series, there was a section at the end called “Extra Credit” that presented a problem and challenged the reader to solve it. Today, we are going to solve that problem. It goes something like this:

We have a machine that adds a liquid into a closed container. The machine calculates the amount of liquid being added by measuring the pressure change inside the container. Boyle’s Law, a very old basic gas law, says that the pressure in a closed container is inversely proportional to the container’s volume. If we make the container smaller, the pressure inside it will go up. Because liquid cannot be compressed, introducing liquid into the container effectively makes the container smaller, resulting in an increase in pressure. We then correlate the increase in pressure to the volume of liquid added to get a calibration curve.

The problem is sometimes the liquid runs out, and gas gets injected into the container instead. When this happens, the machine becomes non-functional. We need a way to tell when gas gets into the container so we can stop the machine and alert the user that there is no more liquid.

One way of doing this is to use the fact that the pressure in the container will increase at a much greater rate when gas is being added as opposed to liquid. If we can measure the rate of change of the pressure in the container during an add, we can differentiate between a gas and a liquid.

Quick Review of the Derivative

Before we get started, let’s do a quick review on how the derivative works. We go into great detail about the derivative here, but we’ll summarize the idea in the following paragraphs.

Full liquid add

An average rate of change is a change in position over a change in time. Speed is an example of a rate of change. For example, a car traveling at 50 miles per hour is changing its position by 50 miles every hour. The derivative gives us an instantaneous rate of change. It does this by getting the average rate of change while making the time intervals between measurements increasingly smaller.

Let us imagine a car is at mile marker one at time zero. An hour later, it is at mile marker 51. We deduce that the average speed of the car was 50 miles per hour. What is the speed at mile marker one? How do we calculate that? [Isaac Newton] would advise us to start getting the average speeds in smaller time intervals. We just calculated the average speed between mile markers 1 and 51. Let’s calculate the average speed between mile markers 1 and 2. And then mile markers 1 and 1.1. And then 1 and 1.01, then 1 and 1.001, etc. As we make the interval between measurements smaller and smaller, we begin to converge on the instantaneous speed at mile marker one. This is the basic principle behind the derivative.

Average Rate of Change

Gas enters between times T4 and T5

We can use a similar process with our pressure measurements to distinguish between a gas and a liquid. The rate-of-change units for this process are PSI per second. We need to calculate this rate as the liquid is being added. If it gets too high, we know gas has entered the container. First, we need some data to work with. Let us make two controls. One will give us the pressure data for a normal liquid add, as seen in the graph above and to the left. The other is the pressure data when the liquid runs out, shown in the graph on the right. Visually, it’s easy to see when gas gets in the system. We see a surge between times T4 and T5. If we calculate the average rate of change over 1-second time intervals, we see that all but one of them are less than 2 psi/sec. Between times T4 and T5 on the gas graph, the average rate of change is 2.2 psi/sec. The next highest change is 1.6 psi/sec, between times T2 and T3.

So now we know what we need to do: monitor the rate of change and error out when it rises above 2 psi/sec.

Our pseudo code would look something like this:

x = pressure;
delay(1000);                    // wait one second
y = pressure;
rateOfChange = (y - x) / 1.0;   // psi per second, since the interval is 1 s
if (rateOfChange > 2) digitalWrite(13, HIGH);  // stop machine and sound alarm

Instantaneous Rate of Change

It appears that looking at the average rate of change over a 1-second interval is all we need to solve our problem. If we wanted an instantaneous rate of change at a specific time, we would need to make that 1-second interval smaller. Remember that our microcontroller is much faster than the changing pressure data. This gives us the ability to calculate an average rate of change over very small time intervals, and if we make them small enough, the average rate of change and the instantaneous rate of change are essentially the same.

Therefore, all we need to do to get our derivative is make the delay smaller, say 50 ms. You can't make it too small, though, or the pressure won't change measurably between readings and your computed rate of change will be zero. The delay value needs to be tailored to the specific machine by some old-fashioned trial and error.
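Pulling the arithmetic into a helper makes the units explicit. This is just a sketch with hypothetical names; the two readings would come from whatever sensor call the machine actually uses, taken dtMs milliseconds apart:

```cpp
// Average rate of change between two pressure readings taken dtMs
// milliseconds apart, expressed in psi per second.
double rateOfChangePsiPerSec(double firstPsi, double secondPsi, double dtMs) {
    return (secondPsi - firstPsi) / (dtMs / 1000.0);
}

// Same 2 psi/sec threshold as the pseudo code above.
bool gasDetected(double ratePsiPerSec) {
    return ratePsiPerSec > 2.0;
}
```

With a 50 ms delay, readings of 20.0 and 20.2 psi give 4 psi/sec and trip the alarm, while readings of 20.0 and 20.05 psi give 1 psi/sec and pass.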

Taking the Limit in a Microcontroller?

One thing we have not touched on is the idea of the limit within a microcontroller, mainly because we don't need it. Going back to our car example: if we can calculate the average speed of the car between mile marker one and mile marker 1.0001, why do we need to go through a limiting process? We already have our instantaneous rate of change with a single calculation.

One can argue that the idea behind the derivative is to converge on a single number while going through a limiting process. Is it possible to do this with incoming data from no known function? Let's try, shall we? We can take advantage of the large gap between the incoming data's rate of change and the processor's speed to formulate a plan.

Let’s revisit our original problem and set up an array. We’ll fill the array with pressure data every 10ms. We wait 2 seconds and obtain 200 data points. Our goal is to get the instantaneous rate of change of the middle data point by taking a limit and converging on a single number.

We start by calculating the average rate of change between data points 100 and 200. We save the value to a variable. We then get the rate of change between points 100 and 150. We then compare our result to our previous rate by taking the difference. We continue this process of getting the rate of change between increasingly smaller amounts of time (from the 100th data point) and comparing them by taking the difference. When the difference is a very small number, we know we have converged on a single value.

We then repeat the process in the opposite direction. We calculate the average rate of change between data points 0 and 100. Then 25 and 100. Then 50 and 100 and so on. We continue the process just as before until we converge on a single number.

If our idea works, we'll come up with two values that look something like 1.3999 and 1.4001. We can then say our instantaneous rate of change at T1 is 1.4 psi per second, and we just keep repeating this process.
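One way the forward half of that search could look. This is only a sketch, assuming 200 samples taken 10 ms apart, with hypothetical names; the mirror-image loop that shrinks the interval from below index 100 works the same way:

```cpp
#include <cmath>

// Average rate of change between samples i and j, in psi per second,
// for samples spaced dtMs milliseconds apart.
double avgRate(const double samples[], int i, int j, double dtMs) {
    return (samples[j] - samples[i]) / ((j - i) * dtMs / 1000.0);
}

// Shrink the interval above the middle sample (index 100) until two
// successive average rates differ by less than eps, then call that
// the converged value.
double convergeFromAbove(const double samples[], int n, double dtMs, double eps) {
    int half = n - 1 - 100;                  // widest interval first
    double prev = avgRate(samples, 100, 100 + half, dtMs);
    while (half > 1) {
        half /= 2;                           // halve the interval each pass
        double cur = avgRate(samples, 100, 100 + half, dtMs);
        if (std::fabs(cur - prev) < eps) {
            return cur;                      // converged
        }
        prev = cur;
    }
    return prev;                             // ran out of samples to shrink
}
```

Run the mirror image of convergeFromAbove downward from index 100; if the two results agree to within a small tolerance, you have your instantaneous rate at the middle sample.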

Now it’s your turn. Think you have the chops to code this limiting process?


Filed under: Arduino Hacks, Hackaday Columns

Continuing Education via Wheel Balancing

Mon, 01/18/2016 - 23:31

There’s an old saying that you should make things twice: once to figure out how to build the thing, and again to build it the right way. [Pmbrunelle] must agree. His senior project in college was a machine to balance wheels. It was good enough for him to graduate, but he wanted it to be even better.

The original machine required observation of measurements on an oscilloscope and manual calculations. [Pmbrunelle] added an AVR micro, a better motor drive, and made a host of other improvements. As you can see in the video below, the machine works, but [Pmbrunelle] still wasn’t happy.

He’s started the Mark II version of the project that will be a start-from-scratch redesign. One goal is to make the process faster (it currently takes about 30 minutes per wheel, which seems like a lot unless you are using it for a unicycle).

In the most recent incarnation, the AVR takes the wheel radius and collected data and tells you where to put the weight and, of course, how much weight to use. The Mark II will reuse many of the components in the existing machine although one of the goals is to replace some hard-to-find parts with things that are more readily available.

We couldn’t help but wonder if you could make a micro version of this for Pinewood Derby service. Although [Pmbrunelle] is using this project to learn more about electronics, following it might teach you something about mechanical engineering.


Filed under: misc hacks, repair hacks