Hackaday
Fresh hacks every day

Microsoft Surface Book Teardown Reveals Muscle Wire Mechanism

Wed, 11/04/2015 - 19:01

It’s hard to resist the temptation to tear apart a shiny new gadget, but fortunately, iFixit often does it for us. This keeps our credit cards safe and reveals the inner workings of new stuff. That is definitely the case with the Microsoft Surface Book teardown they have just published. Apart from revealing that the machine is pretty much impossible to repair yourself, the teardown shows how the innovative hinge and lock mechanism works. The lock that secures the tablet portion in laptop mode is held closed by a spring, and the mechanism is unlocked by a piece of muscle wire.

We are no strangers to muscle wire (AKA Nitinol or shape-memory alloy, as it is sometimes called) here: we have posted on its use in making strange robots, robotic worms, and walls that breathe. Whatever you call it, it is fun stuff. It is normally a flexible wire, but when you apply a voltage, it heats up and contracts, much like the muscles in your body. Remove the voltage, and the wire cools and reverts to its former shape. In the Microsoft Surface Book, a single loop of this wire is used to retract the lock mechanism, releasing the tablet portion.

Unfortunately, the teardown doesn’t go into much detail on how the impressive hinge of the Surface Book works. We would like to see more detail on how Microsoft engineered this into the small space that it occupies. The Verge offered some details in a post at launch, but not much in the way of specifics beyond calling it an “articulated hinge”.

UPDATE: This post was edited to clarify the way that muscle wire works. 11/4/15.

Filed under: computer hacks, teardown

Digging HDMI Out Of UDP Packets

Wed, 11/04/2015 - 16:00

[Danman] was looking for a way to get the HDMI output from a camera to a PC so it could be streamed over the Internet. This is a task usually done with HDMI capture cards, either PCI or even more expensive USB 3.0 HDMI capture boxes. In his searches, [danman] stumbled across an HDMI extender that transmits HDMI signals over standard Ethernet. Surely there must be a way to capture this data and turn it back into video.

The extender boxes [danman] found at everyone’s favorite Chinese reseller were simple – just an Ethernet port, HDMI jack, and a power connector – and cheap – just $70 USD. After connecting the two boxes to his network and setting up his camera, [danman] listened in on the packets being sent with Wireshark. The basic protocol was easy enough to grok, but thanks to the Chinese engineers and an IP header that was the wrong length, [danman] had to read from a raw socket.
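Lacking the actual hardware, we can still sketch the kind of reassembly [danman]’s code performs. The header layout below (a frame number and chunk index in each packet) is purely hypothetical – the real extender’s framing is exactly what has to be reverse engineered in Wireshark:

```python
import struct

# Hypothetical chunk header: a 16-bit frame number and 16-bit chunk index,
# big-endian. The real extender's framing differs and must be reverse
# engineered from captured traffic.
HEADER = struct.Struct(">HH")

def parse_chunk(packet):
    """Split one captured payload into (frame_no, chunk_no, pixel_data)."""
    frame_no, chunk_no = HEADER.unpack_from(packet)
    return frame_no, chunk_no, packet[HEADER.size:]

def reassemble(packets):
    """Group chunk payloads by frame number, ordered by chunk index."""
    frames = {}
    for pkt in packets:
        frame_no, chunk_no, data = parse_chunk(pkt)
        frames.setdefault(frame_no, {})[chunk_no] = data
    return {f: b"".join(chunks[i] for i in sorted(chunks))
            for f, chunks in frames.items()}
```

Chunks arriving out of order are handled by sorting on the index before joining, which is why the header needs a per-frame sequence number at all.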

Once everything was figured out, [danman] was able to recover raw frames from the HDMI extenders, recover the audio, and stream everything to his PC with VLC. All the code is available, and if you’re looking for a way to stream HDMI to multiple locations on a network, you won’t find a better solution that’s this cheap.

Filed under: video hacks

Print Your Own Vertices for Quick Structural Skeletons

Wed, 11/04/2015 - 13:01

3D printing is great for a lot of things: prototyping complex designs, replacing broken parts, and creating unique pencil holders to show your coworkers how zany you are. Unfortunately, 3D printing is pretty awful for creating large objects – it’s simply too inefficient. Not to mention, the small size of most consumer 3D printers is very limiting (even if you were willing to run a single print for days). The standard solution to this problem is to use off-the-shelf material, with only specialized parts being printed. But, for simple structures, designing those specialized parts is an unnecessary time sink. [Nurgak] has created a solution for this with a clever “Universal Vertex Module,” designed to mate off-the-shelf rods at the 90-degree angles that most people use.

The ingenuity of the design is in its simplicity: one side fits over the structural material (dowels, aluminum extrusions, etc.), and the other side is a four-sided pyramid. The pyramid shape allows two vertices to mate at 90-degree angles, and holes allow them to be held together with the zip ties that already litter the bottom of your toolbox.

[Nurgak’s] design is parametric, so it can be easily configured for your needs. The size of the vertices can be scaled for your particular project, and the opening can be adjusted to fit whatever material you’re using. It should work just as well for drinking straws as it does for aluminum extrusions.

Filed under: 3d Printer hacks

Strobe Light Slows Down Time

Wed, 11/04/2015 - 10:00

Until the 1960s, watches and clocks of all kinds kept track of time with mechanical devices. Springs, pendulums, gears, oils, and a whole host of other components had to work together to keep accurate time. The invention of the crystal oscillator changed all of that, making watches and clocks not only cheaper, but (in general) far more accurate. It’s not quite as easy to see them in action, however, unless you’re [noq2] and you have a set of strobe lights.

[noq2] used a Rigol DG4062 function generator and a Cree power LED as a high-frequency strobe light to “slow down” the crystal oscillators from two watches. The first one he filmed was an Accutron “tuning fork” movement and the second is a generic 32,768 Hz quartz resonator of the type used in a large number of watches. After removing the casings and powering the resonators up, [noq2] tuned his strobe light setup to film the vibrations of the oscillators.
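The “slowing down” is just aliasing: each flash samples the motion, so the eye sees the difference between the oscillator frequency and the nearest multiple of the strobe rate. A quick back-of-the-envelope sketch:

```python
def apparent_frequency(f_signal, f_strobe):
    """Frequency of the apparent (aliased) motion seen under a strobe.

    Each flash samples the oscillator's position, so the visible motion
    is the offset between the signal and the strobe harmonic nearest it.
    """
    n = round(f_signal / f_strobe)       # strobe harmonic closest to the signal
    return abs(f_signal - n * f_strobe)

# A 32,768 Hz watch crystal strobed at 32,767 Hz appears to vibrate at 1 Hz:
print(apparent_frequency(32768, 32767))  # -> 1
```

Strobe exactly on frequency and the motion appears frozen; detune by a hertz and you get a leisurely one-cycle-per-second replay.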

It’s pretty interesting to see this in action. Usually a timekeeping element like this, whether in a watch or an RTC, is a “black box” of sorts that is easily taken for granted. Especially since these devices revolutionized the watchmaking industry (and a few other industries as well), it’s well worthwhile to take a look inside and see how they work. They’re used in more than just watches, too. Want to go down the rabbit hole on this topic? Check out the History of Oscillators.

Filed under: clock hacks

Making a Mobility Scooter Drastically More Mobile

Wed, 11/04/2015 - 07:00

Do you have a spare mobility scooter sitting unused in your garage? Or, maybe you’ve got a grandmother who has been complaining about how long it takes her to get to bingo on Tuesdays? Has your local supermarket hired you to improve grocery shopping efficiency between 10am and 2pm? If you answered “yes” to any of those questions, then the guys over at Photon Induction have an “overclocked” mobility scooter build which should provide you with both inspiration and laughs.

They’ve taken the kind of inexpensive mobility scooter that can be found on Craigslist for a couple hundred dollars, and increased the battery output voltage to simultaneously improve performance and reduce safety. Their particular scooter normally runs on 24V, and all they had to do to drastically increase the driving speed was move that up to 60V (72V ended up burning up the motors).

Other than increasing the battery output voltage, only a couple of other small hacks were necessary to finish the build. Normally, the scooter uses a clutch to provide a gentle start. However, the clutch wasn’t up to the task of handling 60V, so the ignition switch was modified to fully engage the clutch before power is applied. The horn button was then used as the accelerator, which simply engages a solenoid with massive contacts that can handle 60V. The result is a scooter that is bound to terrify your grandmother, but which will get her to bingo in record time.

Filed under: transportation hacks

Cutest Possible Atari Disk Drive

Wed, 11/04/2015 - 04:01

[rossumur]’s first computer was an Atari 400, and after riding a wave of nostalgia and forgetting the horrible keyboard found in the Atari 400, he decided it was time to miniaturize the venerable Atari 810 disk drive by putting an entire library of Atari games on a single microSD card.

SD cards have been slowly but surely replacing disk drives for just about every old computer system out there. You no longer need 400k disks for your old Mac, and your Commodore 64 can run directly off an SD card. The Atari 8-bits have been somewhat forgotten in this movement toward modern solid state storage, and although a solution does exist, this implementation is a pretty pricey piece of hardware.

[rossumur]’s hardware for giving the Atari 8-bit computers an SD card slot is just one chip – an LPC1114 ARM Cortex-M0. This, along with an SD card slot, a 3.3V regulator, an LED, and some caps, allows the Atari to talk to the SD card and hold the entire 8-bit Atari library on a piece of plastic the size of a fingernail.

Designing a circuit board doesn’t have the street cred it once did, and to give his project a little more pizzazz he chose to emulate the look of the very popular miniaturized Commodore 1541 disk drive with a tiny replica of the Atari 810 disk drive. This enclosure was printed at Shapeways, and with some enamel hobby paint, [rossumur] had a tiny, tiny 810 drive.

While this build does require the sacrifice of a somewhat rare and certainly old Atari SIO cable, it is by far the best solution yet seen for bringing a massive game library to the oft-forgotten Atari 8-bit home computers.

Thanks [lucas] for the tip.

Filed under: classic hacks, Microcontrollers

iPhone Jailbreak Hackers Await $1M Bounty

Wed, 11/04/2015 - 02:31

According to Motherboard, some unspecified (software) hacker just won a $1 million bounty for an iPhone exploit. But this is no ordinary there’s-a-glitch-in-your-JavaScript bug bounty.

On September 21, “premium” 0day startup Zerodium put out a call for a chain of exploits, starting with a browser, that enables the phone to be remotely jailbroken and arbitrary applications to be installed with root / administrator permissions. In short, a complete remote takeover of the phone. And they offered $1 million. A little over a month later, it looks like they’ve got their first claim. The hack has yet to be verified and the payout actually made.

But we have little doubt that the hack, if it’s actually been done, is worth the money. The NSA alone has a $25 million annual budget for buying 0days and usually spends that money on much smaller bits and bobs. This hack, if it works, is huge. And the NSA isn’t the only agency that’s interested in spying on folks with iPhones.

Indeed, by bringing something like this out into the open, Zerodium is creating a bidding war among (presumably) adversarial parties. We’re not sure about the ethics of all this (OK, it’s downright shady) but it’s not currently illegal and by pitting various spy agencies (presumably) against each other, they’re almost sure to get their $1 million back with some cream on top.

We’ve seen a lot of bug bounty programs out there. Tossing “firmname bug bounty” into a search engine of your choice will probably come up with a hit for most firmnames. A notable exception in Silicon Valley? Apple. They let you do their debugging work for free. How long this will last is anyone’s guess, but if this Zerodium deal ends up being for real, it looks like they’re severely underpaying.

And if you’re working on your own iPhone remote exploits, don’t be discouraged. Zerodium still claims to have money for two more $1 million payouts. (And with that your humble author shrugs his shoulders and turns the soldering iron back on.)

Filed under: iphone hacks, news, security hacks

No Pascal, not a SNOBOL’s chance. Go Forth!

Wed, 11/04/2015 - 01:01

My article on Fortran, This is Not Your Father’s FORTRAN, brought back a lot of memories about the language. It also reminded me of other languages from my time at college and shortly thereafter, say pre-1978.

At that time there were the three original languages – FORTRAN, LISP, and COBOL. These originals are still used, although none make the lists of popular languages. I never did any COBOL but did some work with Pascal, Forth, and SNOBOL, which are from that era. Of those, SNOBOL quickly faded but the others are still around. SNOBOL was a text processing language that basically lost out to AWK, PERL, and regular expressions. Given how cryptic regular expressions are, it’s amazing that another language from that time, APL (A Programming Language), didn’t survive. APL was referred to as a ‘write only language’ because it was often easier to simply rewrite a piece of code than to debug it.

Another language deserving mention is Algol, if only because Pascal is a descendant, along with many modern languages. Algol was always more popular outside the US, probably because everyone there stuck with FORTRAN.

Back then certain books held iconic status, much like [McCracken’s] black FORTRAN IV. In the early 70s, mentioning [Niklaus Wirth] or the yellow book brought to mind Pascal. Similarly, [R. E. Griswold] was SNOBOL and a green book. For some reason, [Griswold’s] two co-authors never were mentioned, unlike the later duo of [Kernighan] & [Ritchie] with their white “The C Programming Language”. Seeing that book years later on an Italian coworker’s bookshelf, translated to Italian, gave my mind a minor boggling. Join me for a walk down the memory lane that got our programming world to where it is today.


The creator of Pascal, [Niklaus Wirth], was something of an iconoclast of his times. Pascal was a rebellion against Algol 68, which he felt was overly complicated and not useful for real work. I’d love to hear his opinion of Java and C++. Working without the constraints of a standards committee, he pared Algol down to the bones to create Pascal.

[Wirth] was a hacker at heart:

Every single project was primarily a learning experiment. One learns best when inventing. Only by actually doing a development project can I gain enough familiarity with the intrinsic difficulties and enough confidence that the inherent details can be mastered. – From Programming Language Design To Computer Construction, Niklaus Wirth 1984 ACM A.M. Turing Award Recipient Lecture.

The language was so compact it was completely described and defined in a 58-page book, almost a pamphlet. This included showing it in Backus-Naur Form, and a novel graphic representation that became known as railroad diagrams. Being visually oriented, I really liked that representation.

An interesting feature is that Pascal was bootstrapped into existence. A bootstrapped language’s compiler is written in the language itself, using the smallest subset possible. That compiler is then hand-translated into another language – probably Fortran in that time frame. Once the hand-written compiler works, you feed it the compiler source written in the new language. Out comes an executable that can now compile itself. From there you can write the additional features in the language itself. Bootstrapping was a boon for computer science students, as it provided them with a working compiler they could extend and hack but that was still small enough to comprehend.

Pascal used a one-pass compiler, while almost all other languages make multiple passes through the code or intermediate outputs. This made compilation fast. A drawback was that it didn’t provide much optimization.

Some Pascal code:

program ArithFunc;
const
  Sentinel = 0.0;
var
  X: Real;
begin
  writeln('After each line enter a real number or 0.0 to stop');
  writeln;
  writeln('X', 'Trunc(X)':16, 'Round(X)':10, 'Abs(X)':10, 'Sqr(X)':10, 'Sqrt(Abs(X))':15);
  readln(X);
  while X <> Sentinel do
  begin
    writeln(Trunc(X):17, Round(X):10, Abs(X):10:2, Sqr(X):10:2, Sqrt(Abs(X)):10:2);
    readln(X)
  end
end.

You can see the relationship of Pascal to current languages like C, C++, and Java, all derived from versions of Algol. One of the big complaints about Pascal was its semicolon rules: semicolons separate statements rather than terminate them, which is why, as you can see in the example, there is no semicolon prior to the final end. Another characteristic is that the keywords begin and end marked blocks of code, where today’s languages use braces.

Pascal is still in use, having gained widespread popularity with the advent of the PC. The first PC compiler used UCSD Pascal P-code, an interpreter engine much like today’s Java virtual machine. P-code was developed to allow easy porting of the language to new machines, the virtual machine being easier to write than a compiler.

The big boom for Pascal on the PC was the release of Borland’s Turbo Pascal (TP) in 1983. It was cheap so hackers could afford it and it had an integrated programming toolkit, what we’d today call an IDE.

I lay claim to using TP for an early, if not first, industrial usage. A team I led developed the software for an Intel 8088 based flow computer, i.e. an embedded system used to measure the flow in pipelines of gas or liquids. I developed an IDE using TP that generated bytecodes that were downloaded to the flow computer. The remainder of the team developed the flow computer software which included an interpreter for the bytecodes. The system was basically Lotus 1-2-3 – Excel for you youngsters – for pipelines since end users could develop their own code.

Turbo Pascal added object oriented features in the ’90s and finally morphed into Delphi.


SNOBOL was used for processing text and symbols. The name really isn’t meaningful, although some tried to shoehorn acronyms and portmanteaus. Griswold and others were sitting around talking about the name, shooting rubber bands, when they decided they “…didn’t have a snowball’s chance in hell in finding a name.” They shortened ‘snow’ and substituted ‘BOL’, which was the ending for many languages of that time.

(Rubber bands were used on card decks, so shooting them was a favorite nerd pastime back then. I once shot a cigarette out of a young lady’s lips with a rubber band. I was good!)

Here is some code to give you a sense of the language:

          OUTPUT = "What is your name?"
          Username = INPUT
          Username "J"                            :S(LOVE)
          Username "K"                            :S(HATE)
MEH       OUTPUT = "Hi, " Username                :(END)
LOVE      OUTPUT = "How nice to meet you, " Username   :(END)
HATE      OUTPUT = "Oh. It's you, " Username
END

Like many languages of the time, SNOBOL used a single-line card format with a special flag character allowing continuation onto other cards. The first field was a destination label. The middle field contained the programming statements. The rightmost field, following a ‘:’, was test and jump.

The OUTPUT and INPUT lines are fairly obvious as is the assignment. After the assignment are two lines that perform a pattern match to see if the letters ‘J’ or ‘K’ are in Username. If the letters appear the line succeeds. On success, flagged by the ‘S‘ in the test field, a jump is made to the destination label. You could also ‘F‘ail, do both, or just jump, as shown by the jump to END in the code.

SNOBOL did all the text manipulation you would expect: concatenation, replacement, extraction, etc. It also did the usual math operations.

It was an interesting language to play with; very easy to understand and work with.


Forth is an intriguing language in multiple ways. It came neither from the computer industry nor academia. [Chuck Moore] developed it while working a number of jobs, but it came to fruition at the National Radio Astronomy Observatory, developing embedded systems for the Kitt Peak observatory. From the start, Forth was meant to be portable. When [Chuck] changed jobs he wanted to take his toolkit along.

Forth is a threaded language. In our more traditional languages, code is translated into a sequence of real or virtual machine opcodes. The machine then steps through the opcodes. In a threaded language, only primitive operations are coded as opcodes. In Forth these primitives are the basic words of the language. All other defined words are built on the primitives.

Instead of opcodes a threaded language stores the address of the word. An inner interpreter in Forth steps through this list of addresses. If the address is that of a primitive word the opcodes are executed. When it is a defined word the inner interpreter jumps to that sequence and begins walking its list of addresses. A word at the end of a sequence causes a return to the previous word definition. The process is straightforward and surprisingly fast. It also makes threaded languages very portable. Once the primitives and the inner interpreter are created Forth is pretty much up and running. All the rest of the language is based on defined words.
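As a rough illustration of the idea – with Python functions standing in for primitives and Python lists standing in for the address lists the inner interpreter walks – a toy threaded interpreter fits in a few lines. This is a caricature, not how a real Forth stores its dictionary:

```python
# Toy model of Forth-style threaded code: primitives are Python functions,
# defined words are lists of previously defined words.
stack = []

def dup():  stack.append(stack[-1])
def mul():  b, a = stack.pop(), stack.pop(); stack.append(a * b)

# A "defined word" is just a thread: a list of words to run in order.
SQR = [dup, mul]                 # : sqr dup * ;
QUAD = [SQR, SQR]                # : quad sqr sqr ;   (threads nest freely)

def execute(word):
    """The inner interpreter: run a primitive, or walk a thread."""
    if callable(word):
        word()                   # primitive: execute its "machine code"
    else:
        for w in word:           # defined word: step through its addresses
            execute(w)

stack.append(3)
execute(QUAD)                    # ((3 squared) squared)
print(stack[-1])                 # -> 81
```

The recursion in execute() plays the role of the return mechanism: reaching the end of a thread automatically resumes the word that called it.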

An outer interpreter provides the user interface. You simply type in words and they are executed. There are words that allow you to define new words which are added to the dictionary. Revising code when working at the user interface is easy: you just enter the word definition again. The new definition overrides the older definition.

A compilation process walks through the dictionary, selecting only the words actually used by other words, strips out all the parts needed for interactive processing, and creates a ‘binary’ containing just the primitive and defined words that are needed. The result, like the execution speed, is surprisingly compact – often smaller than standard compiled code.

Forth is like a giant hand-held calculator from Hewlett Packard: it uses a stack and reverse Polish notation (RPN). To add two numbers you’d enter: 2 3 +. This would push the 2 and 3 onto a stack. The + would pop them and add them leaving the resultant, 5, on the stack. While you can define variables most operations are done using the stack.

Here’s a Forth word definition and example usage for squaring a number:

> : sqr dup * ;
> 2 sqr .
> 3 sqr .

A word definition starts with the colon. The first token after that is the name. It can be anything! Forth often defines some numbers as words because it is actually faster to load the value from a word than convert it. If you want to drive someone nuts you can define 2 as 1 by entering “: 2 1 ;”.

Continuing with the example, the word dup creates a copy of the top of the stack and the ‘*’ multiplies the top two values leaving the resultant on the stack. In the lines where sqr is used the ‘.’ prints the top of the stack and removes the value.

You can try Forth using an online interpreter.

Back in the days of computers using CP/M as the OS, I wrote a text editor using Forth. The editor allowed the use of cursor keys, delete, insert, etc. It took only an hour. My introduction to Forth was at the University of Rochester Laboratory for Laser Energetics. Forth was used to develop the control system for the large laser fusion experiment they were undertaking. I even went to Manhattan Beach, CA for training at the Forth, Inc. offices. The second Forth programmer in the world, Elizabeth Rather, was my instructor.

Forth is still being used in aerospace and commercial applications.

I suggested in the Fortran article that someone create a FortDuino. I can’t make the same suggestion for Forth since it already exists in multiple incarnations. Besides, that would be fairly easy; it’s what Forth is meant to do. A larger challenge is a PiForth, or ForthPi, running as the operating system loaded from the SD card. We’ve seen a Lisp version, so how about Forth? Now that would be interesting.

Wrap Up

I hope you’ve enjoyed these two articles that walked through some history of programming languages from my personal experience. It was fun to bring back the memories and refresh the details by searching the web. The relationships among languages are fascinating, which reminds me of one last story.

The University of Rochester’s computer center, by some quirk, was in the basement of a Holiday Inn on the campus. Jean Sammet, another of the famous ladies in computing history, stayed at the Inn for an ACM conference. She commented that her sleep was disturbed because she kept dreaming of line printers for some strange reason. Nobody had the heart to tell her they may have been right under her room. I recalled this story because one of Jean’s books included a chart of programming language ancestry similar to the chart in the above link.

Filed under: classic hacks, Hackaday Columns, Software Development

Check Out Who’s Speaking at the Hackaday SuperConference

Tue, 11/03/2015 - 23:31

The Hackaday SuperConference is just eleven short days from now! We’ve put together a conference that is all about hardware creation with a side of science and art. Join hundreds of amazing people along with Hackaday crew for a weekend of talks, workshops, and socializing.

Below you will find the full slate of talks, and last week we revealed the lineup of hands-on workshops. We’ve expanded a few of the more popular workshops. If you previously tried to get a ticket and found they were sold out, please check again. We know many of you are working on impressive projects in your workshops, so bring them and sign up for a lightning talk at registration.

This is a gathering of people who make the hardware world go round, and that includes you. Apply now to attend the 2015 Hackaday SuperConference.


2015 Hackaday SuperConference Talks:

Shanni R. Prutchi

Construction of an Entangled Photon Source for Experimenting with Quantum Technologies

Minas Liarokapis

OpenBionics: Revolutionizing Prosthetics with Open-Source Dissemination

Fran Blanche

Fun and Relevance of Antiquated Technology

Danielle Applestone

Founding a hardware startup: what I wish I’d known!

Luke Iseman

Starting a Hardware Startup

Grant Imahara

Recapping Mythbusters and his Engineering Career, followed by a Fireside Chat

Noah Feehan

Making in Public

Jeroen Domburg

Implementing the Tamagotchi Singularity

Sarah Petkus

NoodleFeet: Building a Robot as Art

Alvaro Prieto

Lessons in Making Laser Shooting Robots

Zach Fredin

You Can Take Your Hardware Idea Through Pilot-Scale Production With Minimal Prior Experience And Not Very Much Money, So You Should Do It NOW!!

Kate Reed

The Creative Process In Action

Oscar Vermeulen

PiDP-8: Experiences developing an electronics kit

Reinier van der Lee

The Vinduino Project

Radu Motisan

Global environmental surveillance network

David Prutchi

Construction of Imaging Polarimetric Cameras for Humanitarian Demining

Rory Aronson

Why great documentation is vital to open-source projects

Jonathan Beri

I like to move it, move it: a pragmatic guide to making your world move with motors!

Neil Movva

Adding (wearable) Haptic Feedback to Your Project

Dustin Freeman

The Practical Experience of Designing a Theatre Experience around iBeacons

Kay Igwe

Brain Gaming

Filed under: cons, Featured

The Evolution of Oscillations

Tue, 11/03/2015 - 22:01

The laptop I’m using, found for 50 bucks in the junk bins of Akihabara, has a CPU that runs at 2.53 GHz. Two billion, five hundred and thirty million times every second, electrons pulse through its circuits. To the human mind this is unimaginable, yet two hundred years ago humanity had no knowledge of electrical oscillations at all.

There were clear natural sources of oscillation of course, the sun perhaps the clearest of all. The Pythagoreans first proposed that the earth’s rotation caused the sun’s daily cycle. Their system was more esoteric and complex than the truth as we now know it, and included a postulated Counter-Earth lying unseen behind a central fire. Regardless of the errors their theory contained, a central link was made between rotation and oscillation.

Rotational motion was exploited in early electrical oscillators, both in alternators similar to those in use today and in more esoteric devices like the interrupter. Developed by Charles Page in 1838, the interrupter used rocking or rotational motion to dip a wire into a mercury bath, periodically breaking a circuit to produce a simple oscillation.

As we progressed toward industrial electrical generators, alternating current became common, but higher and higher frequencies were also required for radio transmitters. The first transmitters used spark gaps: a DC supply charged a capacitor until it reached the breakdown voltage of the gap between two pieces of wire. The electricity then ionized the air in the gap, allowing current to flow and quickly discharging the capacitor. The capacitor then charged again, and the process repeated.
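Treating the spark gap as an ideal relaxation oscillator (and ignoring the discharge time), the spark rate follows from the RC charging curve, t = RC·ln(V/(V − Vb)). The component values below are purely illustrative:

```python
import math

def relaxation_frequency(r_ohms, c_farads, v_supply, v_breakdown):
    """Approximate firing rate of a spark-gap (relaxation) oscillator.

    The capacitor charges through R toward the supply voltage; each time
    it reaches the gap's breakdown voltage it discharges and the cycle
    restarts. Discharge time is assumed negligible.
    """
    t_charge = r_ohms * c_farads * math.log(v_supply / (v_supply - v_breakdown))
    return 1.0 / t_charge

# Illustrative values only: 10 kohm, 100 nF, 10 kV supply, 5 kV gap.
print(round(relaxation_frequency(10e3, 100e-9, 10e3, 5e3)))  # -> 1443
```

Note this sets the spark repetition rate, not the radio frequency: each spark rings the antenna circuit at its own, much higher, resonant frequency.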

An Alexanderson Alternator

As you can see and hear in the video above spark gaps produce a noisy, far from sinusoidal output. So for more efficient oscillations, engineers again resorted to rotation.

The Alexanderson alternator uses a wheel in which hundreds of slots are cut. This wheel is placed between two coils. One coil, powered by a direct current, produces a magnetic field, inducing a current in the second. The slotted disc, periodically cutting this field, produces an alternating current. Alexanderson alternators were used to generate frequencies of 15 to 30 kHz, mostly for naval applications. Amazingly, one Alexanderson alternator remained in service until 1996, and is still kept in working condition.
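The arithmetic is simple: each slot passing the pickup produces one cycle, so the output frequency is the slot count times the rotation rate. The figures below are illustrative, not the specifications of any particular machine:

```python
def alternator_frequency(slots, rpm):
    """Output frequency of a slotted-wheel alternator: each slot passing
    the pickup coil produces one cycle of output."""
    return slots * rpm / 60.0

# Illustrative: a 300-slot rotor at 4,000 RPM lands in the VLF band.
print(alternator_frequency(300, 4000))  # -> 20000.0
```

It also shows why these machines were mechanical marvels: tens of kilohertz demands either a huge slot count or a rotor spinning at the edge of its strength.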

A similar principle was used in the Hammond organ. You may not know the name, but you’ll recognize the sound of this early electronic instrument:

The Hammond organ used a series of tone wheels and pickups. The pickups consist of a coil and magnet. In order to produce a tone the pickup is pushed toward a rotating wheel which has bumps on its surface. These are similar to the slots of the Alexanderson Alternator, and effectively modulate the field between the magnet and the coil to produce a tone.

Amplifying the Oscillation

The operation of a tank circuit (from Wikipedia)

So far we have purely relied on electromechanical techniques, however amplification is key to all modern oscillators, for which of course you require active devices. The simplest of these uses an inductor and capacitor to form a tank circuit. In a tank circuit energy sloshes back and forth between an inductor and capacitor. Without amplification, losses will cause the oscillation to quickly die out. However by introducing amplification (such as in the Colpitts oscillator) the process can be kept going indefinitely.
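The tank’s natural frequency follows from the familiar formula f = 1/(2π√(LC)). A quick sketch, with arbitrary example values:

```python
import math

def tank_frequency(l_henries, c_farads):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Example values: 100 uH and 100 pF resonate around 1.6 MHz,
# the top of the AM broadcast band.
print(round(tank_frequency(100e-6, 100e-12)))  # -> 1591549
```

The amplifier’s job is only to top up the energy lost each cycle; the L and C alone set where on the dial the oscillation sits.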

Oscillator stability is important in many applications such as radio transmission. Better oscillators allow transmissions to be packed more closely on the spectrum without fear that they might drift and overlap. So the quest for better, more stable oscillators continued. Thus the crystal oscillator was discovered, and productionized. This was a monumental effort.

Producing Crystal Oscillators

The video below shows a typical process used in the 1940s for the production of crystal oscillators:

Natural quartz crystals mined in Brazil were shipped to the US and processed. I counted a total of 13 non-trivial machining/etching steps and 16 measurement steps (including rigorous quality control). Many of these were quite advanced, such as the alignment of the crystal under X-ray using a technique similar to X-ray crystallography.

These days our crystal oscillator production process is more advanced. Since the 1970s, crystal oscillators have been fabricated in a photolithographic process. To further stabilize the crystal, additional techniques such as temperature compensation (TCXO) or holding the crystal at a controlled temperature with a heating element (OCXO) have been employed. For most applications this has proved accurate enough… but not accurate enough for the timenuts.

Timenuts Use Atoms

A typical timenut wearing an atomic wristwatch

For timenuts there is no “accurate enough”. These hackers strive to create the most accurate timing systems they can, which all of course rely on the most accurate oscillator they can devise.

Many timenuts rely on atomic clocks to make their measurements. Atomic clocks are an order of magnitude more precise than even the best temperature controlled crystal oscillators.

Bill Hammack has a great video describing the operation of a cesium beam oscillator. The fundamental process is shown in the image below. The crux is that cesium gas exists in two energy states, which can be separated under a magnetic field. The low-energy atoms are exposed to a radiation source whose wavelength is determined by a crystal oscillator. Only a frequency of exactly 9,192,631,770 Hz will convert the low-energy cesium atoms to the high-energy form. The high-energy atoms are directed toward a detector, whose output is used to discipline the crystal oscillator: if the oscillator’s frequency drifts and the cesium atoms are no longer directed toward the detector, its output is nudged back toward the correct value. Thus a basic physical constant is used to calibrate the atomic clock.

The basic operating principle of a cesium atomic clock
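To make the disciplining idea concrete, here is a toy simulation – our own sketch, with made-up numbers for linewidth, gain, and drift, nothing like a real cesium servo. The loop probes either side of the oscillator frequency and steers toward the side with the stronger detector response:

```python
import math

F_CS = 9_192_631_770.0  # cesium hyperfine transition frequency, Hz

def detector(f, linewidth=100.0):
    """Toy detector: response peaks when the probe frequency hits the transition."""
    return math.exp(-((f - F_CS) / linewidth) ** 2)

def discipline(f0, steps=2000, dither=20.0, gain=10.0, drift=0.01):
    """Square-wave dither servo: probe either side of f, steer toward
    the side that gives the stronger detector response."""
    f = f0
    for _ in range(steps):
        f += drift                                    # uncorrected crystal drift
        error = detector(f + dither) - detector(f - dither)
        f += gain * error                             # error > 0: transition lies above f
    return f

locked = discipline(F_CS + 50.0)
print(abs(locked - F_CS))   # with these toy numbers, held within ~1 Hz
```

Real servos work against far narrower linewidths and vastly smaller drifts, but the shape of the feedback is the same: the atoms never drift, so any sustained detector imbalance is blamed on the crystal.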

While cesium standards are the most accurate oscillators known, Rubidium oscillators (another “atomic” clock) also provide an accurate and relatively cheap option for many timenuts. The price of these oscillators has been driven down due to volume production for the telecoms industry (they are key to GSM and other mobile radio systems) and they are now readily available on eBay.

With accurate timepieces in hand, timenuts have performed a number of interesting experiments. To my mind the most interesting of these is measuring time differences due to relativistic effects, as in the case of one timenut who took his family and a car full of atomic clocks up Mt. Rainier for the weekend. When he returned, he was able to measure a 20 nanosecond difference between the clocks he took on the trip and those he left at home. This time dilation effect was almost exactly as predicted by the theory of relativity. An impressive result and an amazing family outing!
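The expected shift is easy to estimate from the weak-field time dilation formula. Using assumed round numbers – roughly 1,600 m of altitude gain and a two-day stay, not the actual trip log – we land in the same ballpark as the 20 ns he measured:

```python
# Gravitational time dilation for a clock spending time t at height h
# above a reference clock: delta_t ≈ (g * h / c**2) * t   (weak-field limit)

G_ACCEL = 9.81         # m/s^2
C = 299_792_458.0      # m/s

def altitude_gain_ns(height_m, duration_s):
    """Nanoseconds gained by the higher clock over the stay."""
    return (G_ACCEL * height_m / C**2) * duration_s * 1e9

# Assumed, illustrative numbers for a Mt. Rainier weekend:
shift = altitude_gain_ns(1600, 2 * 24 * 3600)
print(f"{shift:.0f} ns")   # about 30 ns with these assumed figures
```

The measured 20 ns and this rough 30 ns differ only because the real height and duration differ from our guesses; the point is that a weekend at altitude produces tens of nanoseconds, comfortably within reach of a van full of cesium clocks.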

It’s amazing to think that when Einstein proposed the theory of special relativity in 1905, even primitive crystal oscillators would not have been available. Spark gap transmitters and Alexanderson alternators would still have been in everyday use. I doubt he could have imagined that one day the fruits of his theory would be confirmed by one man, on a road trip with his kids, as a weekend hobby project. Hackers of the world, rejoice.

Filed under: clock hacks, Featured

Building Memristors For Neural Nets

Tue, 11/03/2015 - 19:00

Most electronic components available today are just improved versions of what was available a few years ago. Microcontrollers get faster, memories get larger, and sensors get smaller, but we haven’t seen a truly novel component in years, or even decades. There is no electronic component more interesting, or with more novel applications, than the memristor, and now they’re available commercially from Knowm, a company on the bleeding edge of putting machine learning directly onto silicon.

The entire point of digital circuits is to store information as a series of ones and zeros. Memristors also store information, but do so in a completely analog way. Each memristor changes its own resistance in response to the current that has passed through it: ‘writing’ with a positive voltage lowers the resistance, and ‘writing’ with a negative voltage puts the device back into a high-resistance state.
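The behavior can be captured in a crude toy model – our own sketch, not Knowm's device physics, and the constants are invented. Resistance walks between a low and a high bound depending on the polarity and duration of the applied voltage, and the state persists when power is removed:

```python
class ToyMemristor:
    """Crude incremental model: positive voltage lowers resistance,
    negative voltage raises it, and the state persists unpowered."""
    def __init__(self, r_on=100.0, r_off=10_000.0):
        self.r_on, self.r_off = r_on, r_off
        self.r = r_off                    # start in the high-resistance state

    def write(self, volts, dt=1e-3, rate=5e6):
        # Resistance moves opposite the sign of the applied voltage,
        # clamped between R_on and R_off (all numbers illustrative).
        self.r -= rate * volts * dt
        self.r = max(self.r_on, min(self.r_off, self.r))

    def read(self):
        return self.r

m = ToyMemristor()
for _ in range(3):
    m.write(+1.0)            # positive pulses 'pull' it toward low resistance
print(m.read())              # well below the initial 10 kΩ
m.write(-1.0, dt=5e-3)       # a long negative pulse 'pushes' it back
print(m.read())              # back at the 10 kΩ off state
```

A real device's dynamics are nonlinear and history-dependent in subtler ways, but the essential trick – an analog resistance that remembers – is all here.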

Cross section of the metal chalcogenide memristor. Source: Knowm.org

This new memristor is based on research done by [Dr. Kris Campbell] of Boise State University – the same researcher responsible for the silver chalcogenide memristors we saw earlier this year. Like those earlier devices, the Knowm memristor is built using silver chalcogenide molecules. To lower the resistance of the memristor, a positive voltage ‘pulls’ silver ions into the metal chalcogenide layer. The silver ions stay in this chalcogenide layer until they are ‘pushed’ back out by the application of a negative voltage. This gives the memristor its core functionality – being able to remember how much current has gone through it.

This technology is different from the first memristors made by HP in 2008, and has allowed Knowm to create functional memristors on silicon with a relatively high yield. Knowm is currently selling a ‘tier 3’ part in which two of the eight memristors on the device fail QC testing. A ‘tier 1’ part, with all eight memristors working, is available for $220 USD.

As for applications for this memristor, Knowm is using this technology in something they call Thermodynamic RAM, or kT-RAM. This is a small coprocessor that allows for faster machine learning than would be possible with a computer with a much more traditional architecture. This kT-RAM uses a binary tree layout with memristors serving as the links between nodes.

While it’s much too soon to say if a kT-RAM processor will be better or more efficient at performing machine learning tasks in real life, a machine learning coprocessor does have a faint echo of the machine learning silicon developed during the 80s AI renaissance. Thirty years ago, neural nets on a chip were created by a few companies around Boston, until someone realized these neural nets could be simulated on a desktop PC much more efficiently. The kT-RAM is somewhat novel and highly parallel, though, and with a new electronic component it could be just what is needed to push machine learning directly into silicon.

Filed under: news

Gameboy Camera Becomes Camcorder

Tue, 11/03/2015 - 16:01

[Furrtek] is a person of odd pursuits, which mainly involve making old pieces of technology do strange things. That makes him a hero to us, and his latest project elevates this status: he built a device that turns the Nintendo Gameboy camera cartridge into a camcorder. His device replaces the Gameboy, capturing the images from the camera, displaying them on the screen and saving them to a micro SD card.

Before you throw out your cellphone or your 4K camcorder, bear in mind that the captured video is monochrome (with only 4 levels between white and black), at a resolution of 128 by 112 pixels and at about 14 frames per second. Sound is captured at 8192 Hz, producing the same buzzy, grainy sound the Gameboy is famous for. Although it isn’t particularly practical, [Furrtek]’s build is extremely impressive, built around an NXP LPC1343 ARM Cortex-M3 microcontroller. This processor repeatedly requests images from the camera, then packages the frames and sound together to form the video and saves it to the micro SD card. As always, [Furrtek] has made all of the source code and other files available for anyone who wants to try it out.

For those who aren’t familiar with his previous work, [Furrtek] has done things like making a Speak & Spell swear like a sailor, adding a VGA out to a Virtualboy, and hacking a Gameboy Color to control electronic shelf labels.

Filed under: nintendo hacks

Escape Cable Hell with an Audio I/O Multiplexer

Tue, 11/03/2015 - 13:01

If you ever find yourself swapping between a mix of audio inputs and outputs and get tired of plugging cables all the time, check out [winslomb]’s audio multiplexer with integrated amplifier. The device can take any one of four audio inputs, pass the signal through an amplifier, and send it to any one of four outputs.

The audio amplifier has a volume control, and the inputs and outputs can be selected via button presses. An Arduino Pro Mini takes care of switching the relays based on the button presses. On the input side, you can plug in devices like a phone, TV, digital audio player or a computer. The output can be fed to speakers, headsets or earphones.

At the center of the build lies a TI TPA152 75-mW stereo audio power amplifier. This audio op-amp is designed to drive 32 ohm loads, so performance might suffer when connecting lower-impedance devices, but it seems to work fine for headphones and small computer speakers. The dual-gang potentiometer controls the volume, and the chip has a useful de-pop feature. The circuit is pretty much a copy of the reference design shown in the data sheet. Switching between inputs or outputs is handled by a bank of TLP172A solid state relays with MOSFET outputs, and it’s all tied together with a microcontroller, allowing for WiFi or BLE functionality to be added later.
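The selection logic itself is simple enough to sketch. This is our own illustration, not [winslomb]'s Arduino firmware: the one rule worth encoding is break-before-make, so that two sources or two loads are never connected at the same time:

```python
class AudioMux:
    """4-in / 4-out selector: exactly one input relay and one output relay
    closed at a time, opening the old relay before closing the new one."""
    def __init__(self, n=4):
        self.inputs = [False] * n
        self.outputs = [False] * n
        self.inputs[0] = self.outputs[0] = True   # default to channel 0

    def _select(self, bank, channel):
        bank[bank.index(True)] = False            # break before make
        bank[channel] = True

    def select_input(self, ch):
        self._select(self.inputs, ch)

    def select_output(self, ch):
        self._select(self.outputs, ch)

mux = AudioMux()
mux.select_input(2)    # say, the TV input
mux.select_output(1)   # say, the headphone output
print(mux.inputs, mux.outputs)
# [False, False, True, False] [False, True, False, False]
```

On the real hardware each boolean would drive one of the TLP172A relays through a GPIO pin; the invariant that only one relay per bank is ever closed is what keeps inputs from fighting each other.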

[winslomb] laid out the design using Eagle and he made a couple of footprint mistakes for the large capacitors and the opto-relays. (As he says, always double-check part footprints!) In the end, he solder-bridged them on to the board, but they should probably be fixed for the next revision.

[winslomb] built the switch as his capstone project while on his way to getting a Masters in EE, and although the device did function as required, there is still room for improvement. The GitHub repository contains all the hardware and software sources. Check out the video below where he walks through a demo of the device in action. If you are looking for something simpler, here is a two input – one output audio switcher with USB control and on the other end of the spectrum, here’s an audio switch that connects to the Internet.

Filed under: digital audio hacks, news

Hack The Steam Controller?

Tue, 11/03/2015 - 10:01

[willrandship] sent in a conversation from Reddit discussing the programming ports inside the Steam controller and their potential for hacking. From the posts and the pictures it seems the radio/SoC and the MCU can be programmed on the board, or at least they both have JTAG headers. The JTAG headers are in the form of “Tag-Connect” pads on the board so it will require the dedicated cable or soldering some hardware to the board temporarily.

From the pictures we can see a NXP LPC11U37F ARM Cortex-M0 and a Nordic nRF51822 ARM Cortex-M0 SoC with integrated Bluetooth low energy. There are only a limited number of Steam Controllers in the wild at this time so we don’t expect much in the way of hacking them thus far. There is a Steam Controller hackaday.io project just started for anyone who would like to contribute to the Steam Controller hacking.

The controller is available for pre-order on the Steam website for $49.99 but alas, it is just a pre-order, and we can’t play with it today. The semi-good news is that it’s still possible to have one before Christmas if you’re willing to pay for a hamburger today.

In case you missed it, we covered Steam’s statement about the controller being open and hackable back in 2013. We also discussed the JTAG “Tag-Connect” connector a while back.


Filed under: handhelds hacks, news

Dalek-Berry-Pi Mower

Tue, 11/03/2015 - 07:01

There’s something about lawn mowers and hackers: a desire to make them into smart, independent robots. Probably in preparation for the day when Skynet becomes self-aware, or the Borg collective comes along to assimilate them into the hive. [Ostafichuk] wanted his to be ready when that happens, so he’s building a Raspberry Pi-powered, Dalek-costumed lawn mower that has been a work in progress since 2014. According to him, “commercial robot lawn mowers are too expensive and not nearly terrifying enough to be any fun, so I guess I will just have to build something myself…”

His first report describes the basic skeletal structure he built using scrap pieces of wood. Two large lawn tractor wheels and a third pivot wheel help with locomotion. The two large wheels are driven by geared motors originally meant for car seat height adjustment. A deep cycle 12 V battery and solar panels for charging take care of power. A Raspberry Pi provides the brain power for the Dalek-Mower, and L298N-based drivers drive the motors. The body was built from some more planks of scrap wood that he had lying around.

While waiting for several parts to arrive – ultrasonic sensors, accelerometer, 5 V power supply modules – he started to paint and decorate the woodwork. Generous amounts of water-repellent paint and duct tape were used to make it weatherproof. His initial plan was to write the code in Python, but he later switched to C with the wiringPi library. Code for the project is available from his bitbucket git repository. Load testing revealed that the L298N drivers were not suitable for the high current drawn by the motors, so he changed over to relays to drive them.

Before starting work on the lawn mowing part, the Dalek-Mower was put to good use during Halloween, after which he put the hardware in storage and got to work on the code. One of his goals was to use the Raspberry Pi cameras as a vision system, with markers spread around his yard as waypoints. Getting the RaspiCam to work proved more difficult than he anticipated. After a fruitless struggle to get his C code working, he moved to C++. This part still looks like it needs refinement, and it seems to be stretching the Pi to its computing limits. Eventually, he removed all of the RaspiCam functionality he had coded and used another video streaming solution instead.

A web interface now allows him to remotely control functions on the Dalek-Mower such as motor movement controls, playing audio clips, lighting control and several others for future use – Mower, Patrol, Fire Nerf Gun. [ostafichuk] has put in quite a lot of work over almost a year, and we hope he gets the Dalek-Mower ready for whatever sci-fi scenario comes our way in the future. Check the video below of the Dalek-Mower controlled via tablet by [ostafichuk]-junior.

And while on the subject, check out this Robotic Lawn Mower which is the only household robot to make the semifinalists in The Hackaday Prize this year.

Filed under: Holiday Hacks, robots hacks

Tracking the Hamster Marathon

Tue, 11/03/2015 - 03:31

[Michelle Leonhart] has two Roborovski hamsters (which, despite the name, are organic animals and not mechanical). She discovered that they seem to run on the hamster wheel all the time. A little Wikipedia research turned up an interesting factoid: This particular breed of hamster is among the most active and runs the equivalent of four human marathons a night. Of course, we always believe everything we read on Wikipedia, but not [Michelle]. She set out to determine if this was an accurate statement.

She had already silenced the critters’ wheel by fitting it with a ball bearing cannibalized from an old VCR. What she needed was the equivalent of a hamster pedometer. A Raspberry Pi and a Hall effect sensor did the trick. At least for the raw measurement. But it still left the question: how far is a hamster marathon?

[Michelle] went all scientific method on the question. She determined that an average human female’s stride is 2.2 feet, which works out to 2,400 strides per mile. A marathon is 26.2 miles (based on the distance Pheidippides supposedly ran to inform Athens of victory after the battle of Marathon). This still left the question of the length of a hamster’s stride. Surprisingly, there was no definitive answer, and [Michelle] proposed letting them run through ink and then tracking their footsteps. Luckily, [Zed Shaw] heard about her plan on Twitter and suggested pointing a webcam up through the plastic bottom of the cage along with a scale. That did the trick, and [Michelle] measured her hamster’s stride at about 0.166 feet (see right).

Now it was a simple matter of math to determine that a hamster marathon is just under 10,500 feet. Logging the data to SQLite via ThingSpeak for a month led [Michelle] to the conclusion: her hamsters didn’t run 4 marathons’ worth in a night. In fact, they never really got much over 2 marathons.
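The arithmetic is easy to check – scale a marathon's worth of human strides down to hamster stride length:

```python
HUMAN_STRIDE_FT = 2.2
MARATHON_MILES = 26.2
HAMSTER_STRIDE_FT = 0.166
FT_PER_MILE = 5280

strides_per_mile = FT_PER_MILE / HUMAN_STRIDE_FT            # 2400 strides
strides_per_marathon = strides_per_mile * MARATHON_MILES    # 62880 strides
hamster_marathon_ft = strides_per_marathon * HAMSTER_STRIDE_FT

print(round(strides_per_marathon))   # 62880
print(round(hamster_marathon_ft))    # 10438 -- "just under 10,500"
```

So a hamster marathon is the same 62,880 strides a human runner would take, covering just under two miles at hamster scale.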

Does [Michelle] have lazy hamsters, or did she just add to our body of scientific knowledge about rodents? We don’t know. But we couldn’t help but admire her methods and her open source data logging code would probably be useful for some non-hamster activities.

If you are super competitive, you could use [Michelle’s] data to handicap yourself and challenge your pets to a race. But it would probably be cooler to build them their own Starship Troopers-style walkers. Either way, you can check out [Michelle’s] little marathon runners in the video below.

Filed under: Raspberry Pi

The Eloquence of the Barcode

Tue, 11/03/2015 - 00:01

Beep. You hear it every time you buy a product in a retail store. The checkout person slides your purchase over a scanner embedded in their checkout stand, or shoots it with a handheld scanner. The familiar series of bars and spaces on the label is digitized, decoded to digits, and then used as a query to a database of every product that particular store sells. It happens so often that we take it for granted. Modern barcodes have been around for 41 years now. The first product purchased with a barcode was a 10 pack of Juicy Fruit gum, scanned on June 26, 1974 at Marsh supermarket in Troy, Ohio. The code scanned that day was UPC-A, the same barcode used today on just about every retail product you can buy.

The history of the barcode is not as cut and dried as one would think. More than one group has been credited with inventing the technology. How does one encode data for a machine, store it on a physical medium, then read it back at some later date? Punch cards and paper tape have been doing that for centuries. The problem was storing that data without cutting holes in the carrier. The overall issue was common enough that efforts were launched in several different industries.

In the 1930s, John Kermode, Douglas Young, and Harry Sparkes created a four-bar barcode. They were Westinghouse engineers, and not surprisingly the application was automating the payment processing of electric power bills. The patents, however, were generalized as “card sorters”.

In 1948, Bernard Silver and Joseph Woodland began work on a system for reading linear and circular printed codes for supermarkets. They took their inspiration from optical audio tracks used in 16mm and 35mm film. In fact, their reader employed an RC935 photomultiplier tube normally used in movie projectors. Silver and Woodland are often credited as inventors of the barcode, but they reference the Westinghouse patent in their own work. Several companies including IBM took interest in the patent, but determined that key technologies still needed to be developed before it would be a practical system. Philco bought the patent, eventually selling it to RCA.

Perhaps the most infamous claim to the barcode throne came from Jerome H. Lemelson. Lemelson was granted over 600 patents in his lifetime, including some for machine vision. Many of these were considered submarine patents. He made most of his fortune by enforcing and licensing those patents to the tune of 1.3 billion dollars. This branded him as an early patent troll. Lemelson’s barcode patents were declared unenforceable in a landmark 2004 court case against Cognex Corporation and Symbol Technologies. This case is often referenced in patent troll litigation today.

What we know as the modern barcode got its start in the late 1960s. Local markets were evolving into supermarkets, and checkout systems built around mechanical cash registers were the obvious bottleneck. But how to speed things up? Grocery trade associations created the Uniform Grocery Product Code Council (now GS1) to tackle the problem. GS1 solicited solutions and received proposals from RCA, IBM, Singer, Dymo, Litton, and Pitney Bowes, among others. RCA drew on the Silver and Woodland patent to create a bulls-eye code. IBM may not have had the patent, but they had something better: Joseph Woodland had been an IBM employee for several years at that point. He was recruited to a team that included George Laurer. Laurer is still active in the industry, maintaining a webpage with information about barcodes. The team worked hard to design a robust code. In the end it was the IBM code that became the Universal Product Code (UPC) we all have come to know.

The UPC symbology has remained relatively unchanged since 1974. There have been some extensions to encode extra data, but the core has endured as a long-lasting standard. Once the code was in use, a revision would require massive changes from the printing industry all the way through the point of sale industry.

Building a Barcode

UPC-A is a numeric-only symbology, and it is fixed width. Each UPC-A symbol encodes twelve digits; however, one digit is used as a check character, leaving only eleven usable digits. The framework of the code starts with a quiet zone, which is literally a quiet area the same color as the spaces. Just inside the quiet zone is a guard bar, a unique pattern that defines the start (or end) of the code. UPC-A has quiet zones and guard bars at the start and end of the code, and a unique center guard bar defines the middle of the symbol. The rest of the code is made up of the twelve characters.

To envision how UPC-A encodes data, think of Morse code. If one drew all the dots and dashes of a Morse code message, they would have a rudimentary barcode. In practice, the Morse character set doesn’t work very well because it uses variable-length characters. A ‘T’ is one dash, while a ‘Y’ is three dashes and a dot. Determining where one character ends and another begins would require spaces to be added between every character. That works in radio communications, but becomes inefficient on the printed page.

Characters – It’s all in the widths

Individual UPC characters are also fixed width. The basic unit of length is called a module, which represents the smallest bar or space used in the symbology. The nominal module size used by UPC-A is 0.33 mm. Each UPC character is made up of two bars and two spaces, with a total width of 7 modules. For the digit 0 on the left side of a UPC-A, the character is 3,2,1,1 – meaning a space 3 modules wide, followed by a bar 2 modules wide, then a space and a bar one module wide each.

Characters on the right of the center guard bar are color-inverted from those on the left. That means every character on the left side starts with a space, while every character on the right starts with a bar.

Why all the complication with two inverted character sets? Direction! The grocery checkout barcode scanner hasn’t changed much over the years. It’s mounted in a slot and items are passed over it. The barcode can be in any orientation, so the scanner has to be able to decode the symbol left to right, right to left, or at nearly any angle.

The important thing to remember about reading a barcode is that it’s all about relative widths. With a handheld scanner, the barcode can be at any reasonable distance from the scanner: a more distant barcode will simply appear smaller than a close one. A reader just has to compare the smallest element it sees (the module) to the widths of the other elements. Once these relative widths conform to the rules for the quiet zone and guard bars, the reader decides it has found a possible code and begins to look for characters.
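The left-hand character set and the check digit are compact enough to sketch. The widths below follow the standard UPC-A character table (the right-hand set uses the same widths, starting with a bar instead of a space):

```python
# Left-side UPC-A characters as (space, bar, space, bar) module widths;
# every character is exactly 7 modules wide.
LEFT = {
    "0": (3, 2, 1, 1), "1": (2, 2, 2, 1), "2": (2, 1, 2, 2),
    "3": (1, 4, 1, 1), "4": (1, 1, 3, 2), "5": (1, 2, 3, 1),
    "6": (1, 1, 1, 4), "7": (1, 3, 1, 2), "8": (1, 2, 1, 3),
    "9": (3, 1, 1, 2),
}

def to_modules(digit):
    """Render a left-side character as modules: 0 = space, 1 = bar."""
    out, color = "", "0"
    for width in LEFT[digit]:
        out += color * width
        color = "1" if color == "0" else "0"
    return out

def check_digit(first11):
    """UPC-A check character: odd positions (1st, 3rd, ...) weighted by 3;
    the check digit brings the total up to a multiple of ten."""
    total = sum((3 if i % 2 == 0 else 1) * int(d)
                for i, d in enumerate(first11))
    return (10 - total % 10) % 10

print(to_modules("0"))             # 0001101 -- the 3,2,1,1 pattern
print(check_digit("03600029145"))  # 2
```

That last line completes 036000291452, widely cited as the Wrigley's Juicy Fruit code scanned in 1974.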

UPC-A may have been the first commonly used barcode, but it didn’t stand alone for long. Europe modified the spec, adding a digit. The resulting code was called EAN-13. EAN codes all include a three digit country code. An odd side effect of this was the creation of the fictional country “Bookland”, which is used for books and other publications.

Today, there are dozens of different barcode symbologies out there. Commonly used linear symbologies include Code 39, Code 128, GS1 DataBar, Interleaved 2 of 5, and MSI Plessey. When one line isn’t enough, 2D symbologies are used, including PDF417, Aztec Code, MaxiCode, Data Matrix, and QR code. We take for granted how easy it is to scan a code and jump to a webpage – but it all started with the simple UPC.

Barcode images from Wikipedia.

Filed under: Featured, misc hacks

Stellarator is Germany’s Devilishly Complex Nuclear Fusion Machine

Mon, 11/02/2015 - 23:31

You may not have heard of a Stellarator before, but if all goes well later this month in a small university town in the far northeast of Germany, you will. That’s because the Wendelstein 7-X is finally going to be fired up. If it’s able to hold the heat of a fusion-capable plasma, it could be a huge breakthrough.

So what’s a stellarator? It’s a specific type of nuclear fusion containment geometry that, while devilishly complex to build and maintain, stands a chance at being the first fusion generator to achieve break-even, where the energy extracted from the fusion reaction is greater than or equal to the energy used in creating the necessary hot plasma.

There’s an awesome video on the W7-X, and some of the theory behind the reactor just below the break.

Most fusion reactors are tokamaks, which are doughnut-shaped reactors wrapped in superconducting magnets that contain the plasma inside the loop. The problem with tokamaks is that they don’t have an intrinsically uniform magnetic field inside because, put simply, the coils on the hole of the doughnut are closer together than those on the outside. This makes the plasma want to wander outward and hit the walls, which isn’t good.

To compensate, a tokamak induces a circular current within the plasma as well, which pulls it inward. The problems with this are the heat in the parts that induce the current, as well as instabilities in the current as it runs through the plasma itself. This means that tokamaks can’t run for very long: the record is around 6.5 minutes. This makes it very hard for a tokamak to produce more energy than it takes to fire up the plasma in the first place. (Tokamak schematic courtesy Max Planck Institute.)

Stellarators get around this by twisting the path of the plasma. The original stellarator design, invented by all-around physics-badass [Lyman Spitzer], twisted a loop into a figure-8 shape. The point is that the plasma that’s on the inside of the track in one half of the eight is on the outside in the other half, and there’s no need for the tokamak’s toroidal plasma current. The W7-X twists the plasma band five times as it moves around what’s essentially a circle.

And this means that there’s a chance of containing a plasma for a much longer time in a stellarator, and eventually breaking even on the startup energy. The only problem is that making one is devilishly difficult, even with the benefit of a billion Euros and the best German engineering. Just looking at the number of access ports on the W7-X gives you a bit of insight into how incredibly complex this machine is. 

According to this coverage in Science magazine, the W7-X project was given the green light in 1994 and was due to be completed by 2006 at a cost of €550 million. But the reactor involves 425 tons of specifically-shaped superconducting magnets and the subsequent cooling apparatus. They ran into manufacturing troubles with the magnets, with a third of them failing inspection and one of the firms producing them going bankrupt.

Not that delays or cost overruns are uncommon in fusion reactors — the ultra-large ITER tokamak is probably going to cost around thirteen billion Euros (three times the original budget) and be eleven years late by the time it’s firing a plasma in 2020. Creating stable nuclear fusion just isn’t easy.

But the W7-X looks to be ready for a real plasma firing any day now. They’ve already pre-tested the containment on (cold) electrons, and everything’s reported to be perfect. If they can keep a hot plasma stable for a long(er) time, it may pave the way to actual fusion power generation. Keep your fingers crossed!

Filed under: green hacks, news

Making the Case For Nuclear Aircraft

Mon, 11/02/2015 - 21:00

At any given moment, several of the US Navy’s Nimitz class aircraft carriers are sailing the world’s oceans. Weighing in at 90,000 tons, these massive vessels need a lot of power to get moving. One would think this power requires a lot of fuel, which would limit their range, but this is not the case. Their range is virtually unlimited, and they only need refueling every 25 years. What kind of technology allows for this? The answer is miniaturized nuclear power plants. Nimitz class carriers have two of them, and they work on the same principles as the much larger plants that generate grid electricity. If we can make them small enough for ships, can we make them small enough for other things, like airplanes?

Nuclear Power 101

Nuclear reactors use the controlled splitting of uranium atoms to produce energy. This energy is transferred into water as heat. The water is kept under high pressure, which keeps it from boiling into steam and allows it to become superheated. The superheated water is pumped to a heat exchanger, where it heats a second, separate loop of water to produce steam. This heat exchanger not only transfers energy, but also isolates the radioactive primary loop from the rest of the system. The non-radioactive steam from the secondary loop is then used to turn a turbine that produces electricity. The steam eventually heads to a condenser, where it turns back into liquid water and is returned to the heat exchanger.

So now we know how nuclear power works, we get to work on the fun stuff! Our job is to discuss how we can make it really really small and cram it into an airplane. And then examine the consequences of such technology.

The Heat Transfer Reactor HTRE-3

Jet engines use spinning blades to compress air ahead of a combustion chamber. The compressed air is sprayed with fossil fuel and ignited with a spark. The resulting release of energy is expelled from the engine, producing thrust. A nuclear powered jet engine is pretty much the same, minus the combustion chamber. Air is compressed and sent to a plenum, where a nuclear reactor heats it to a very high temperature. The superheated air is then expanded through a turbine (which drives the compressor) and expelled to produce thrust.

The US government built a nuclear powered jet engine in the 1950s. It was called the Heat Transfer Reactor Experiment – 3, or HTRE-3 for short. It used liquid salt as opposed to water for the heat exchanger. The salt could get much hotter and was more efficient in the transfer of energy to the air.

The idea behind the nuclear powered jet is similar to the nuclear powered ship – no need to stop for gas. A nuclear powered jet would have an unlimited range. However, the advent of the ICBM made such an aircraft obsolete for military purposes.

What About Commercial Jets?

Nuclear Jet Engine

While it might not be practical to make a nuclear powered jet to drop bombs on people, what about for transporting people?  With today’s technology, is it possible to build a small, safe nuclear jet engine that could be used on a commercial airliner? It’s difficult to imagine that it would not be. Material science is much more advanced today than it was in the 1950s. We have the ability to perfect a smaller nuclear engine. The question is – why aren’t we?


The first thing that comes to mind is what happens if there is a catastrophic incident that causes the plane to break apart mid-flight. How do you contain radioactive material during such a disaster? Let us rewind to the fall of 1997, when the Cassini spacecraft was getting ready to launch. At its heart was 37.2 kilograms of plutonium-238. You can imagine the controversy of strapping radioactive material atop a giant controlled explosion. NASA ensured that if the rocket exploded, the radioactive material would be contained. Cassini of course went on to be a very successful mission that unlocked many of Saturn’s secrets. We were able to mitigate the risks and dangers with Cassini. We should be able to do the same with a much more stable flight vehicle – a jet plane.

It’s Not Weapons Grade

The second thing that comes to mind is somebody stealing one and making a bomb. This is where we need to point out the difference between weapons grade and reactor grade uranium. You might have heard these terms used in the news recently, as they are key points in the Iran nuclear deal. Natural uranium is roughly 99% U238 and 1% U235. The U238 isotope will not work for nuclear power or bombs; you need a higher concentration of the U235 isotope. A bomb requires a very high concentration of U235, while a power reactor runs on a much lower one. The process of concentrating the U235 isotope is known as enrichment.

Enriching uranium is difficult to do because the properties of the U235 and U238 isotopes are very similar. The most common technique separates the two by weight using high-tech centrifuges. This is the sticking point with the Iran deal. Nobody cares if they have reactor grade uranium. The deal is to prevent them from making centrifuges capable of separating enough U235 out of it to make a bomb.
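To get a feel for why enrichment is the hard part, a simple U235 mass balance works. The assays here are illustrative textbook numbers (0.711% natural feed, 0.3% tails), not figures from the article: the natural-uranium feed F needed to produce P kilograms of product at assay x_p, leaving tails at x_w, is F = P(x_p − x_w)/(x_f − x_w).

```python
def feed_required(product_kg, x_product, x_feed=0.00711, x_tails=0.003):
    """Natural-uranium feed (kg) needed for a given enriched product,
    by U235 mass balance: F = P * (xp - xw) / (xf - xw)."""
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

print(round(feed_required(1, 0.04), 1))   # ~9 kg of natural U per kg of 4% reactor fuel
print(round(feed_required(1, 0.90)))      # ~218 kg per kg of 90% weapons-grade
```

Producing weapons-grade material takes well over twenty times the feed (and vastly more centrifuge work) than reactor fuel does, which is why the centrifuges, not the uranium itself, are what the deal restricts.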

So in short – you don’t have to worry about someone making a bomb out of the uranium from a nuclear powered jet engine.

Impact of Nuclear Powered Commercial Jets

Let us close by thinking about the impact a safe nuclear powered jet would have on society. Moving away from fossil fuels is a popular movement, and if we could convince everyone that it’s safe, it would have great support from many national leaders. Also, fuel cost is one of the airline industry’s greatest expenses; a plane that doesn’t require fuel would be a huge cost saver, allowing for cheaper plane tickets for all of us.

Now it’s your turn. Nuclear Powered Commercial Jet – Yes or No?

Filed under: Featured, transportation hacks

Lithium-Air Might Be The Better Battery

Mon, 11/02/2015 - 18:01

Researchers at Cambridge University demonstrated their latest version of what is being called the Lithium-Air battery. It can be more accurately referred to as a Lithium-Oxygen battery, but Air sounds cooler.

The early estimates look pretty impressive: the battery is around 93% energy efficient, could offer up to 10 times the energy density of Lithium-Ion, and claims to be rechargeable up to 2,000 times. Recent improvements toward Lithium-Air batteries include a graphene contact and using lithium hydroxide in place of lithium peroxide, which increased both stability and efficiency.

Here’s the rub: Lithium-Air batteries are still years away from being ready for commercial use. There are still problems with the battery’s ability to charge and discharge (kind of a deal breaker if the battery won’t charge or discharge, right?). There are still issues with safety, performance, efficiency, and the all-too-apparent need for pure oxygen.

Do batteries get you all charged up? Check out our coverage of MIT’s solid state battery research, or have a look at the Nissan Leaf and/or Tesla battery packs.

Thanks to [Jimmy] for the tip.

Filed under: car hacks, news