Hackaday

Fresh hacks every day

Neural Networks: You’ve Got It So Easy

Monday, 04/24/2017 - 21:01

Neural networks are all the rage right now with increasing numbers of hackers, students, researchers, and businesses getting involved. The last resurgence was in the 80s and 90s, when there was little or no World Wide Web and few neural network tools. The current resurgence started around 2006. From a hacker’s perspective, what tools and other resources were available back then, what’s available now, and what should we expect for the future? For myself, a GPU on the Raspberry Pi would be nice.

The 80s and 90s

[Image: neural network books and magazines from the 80s and 90s]

For the young’uns reading this who wonder how us old geezers managed to do anything before the World Wide Web, hardcopy magazines played a big part in making us aware of new things. And so it was Scientific American magazine’s September 1992 special issue on Mind and Brain that introduced me to neural networks, both the biological and artificial kinds.

Back then you had the option of writing your own neural networks from scratch or ordering source code from someone else, which you’d receive on a floppy diskette in the mail. I even ordered a floppy from The Amateur Scientist column of that Scientific American issue. You could also buy a neural network library that would do all the low-level, complex math for you.  There was also a free simulator called Xerion from the University of Toronto.

Keeping an eye on the bookstore Science sections did turn up the occasional book on the subject. The classic was the two-volume Parallel Distributed Processing: Explorations in the Microstructure of Cognition by Rumelhart, McClelland et al. A favorite of mine was Neural Computation and Self-Organizing Maps: An Introduction, useful if you were interested in neural networks controlling a robot arm.

There were also short courses and conferences you could attend. The conference I attended in 1994 was a free two-day one put on by Geoffrey Hinton, then of the University of Toronto, both then and now a leader in the field. The best-reputed annual conference at the time was the Neural Information Processing Systems (NIPS) conference, still going strong today.

And lastly, I recall combing the libraries for published papers. My stack of conference papers and course handouts, photocopied articles, and handwritten notes from that period is around 3″ thick.

Then things went relatively quiet. While neural networks had found use in a few applications, they hadn't lived up to their hype, and outside of a limited research community they ceased to matter. Things remained quiet as gradual improvements were made, along with a few breakthroughs, and then finally around 2006 they exploded onto the scene again.

The Present Arrives

We’re focusing on tools here but briefly, those breakthroughs were mainly:

  • new techniques for training networks that go more than three or four layers deep, now called deep neural networks
  • the use of GPUs (Graphics Processing Units) to speed up training
  • the availability of training data containing large numbers of samples
Neural Network Frameworks

There are now numerous neural network libraries, usually called frameworks, available for free download under various licenses, many of them open source. Most of the more popular ones let you run your neural networks on GPUs, and are flexible enough to support most types of networks.

Here are most of the more popular ones. They all have GPU support except for FANN.

TensorFlow

Languages: Python, C++ is in the works

TensorFlow is Google’s latest neural network framework. It’s designed for distributing networks across multiple machines and GPUs. It’s a relatively low-level framework, offering great flexibility but also a steeper learning curve than high-level ones like Keras and TFLearn, both covered below. However, the developers are working on a version of Keras integrated into TensorFlow.
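
To give a feel for what “low-level” means here, below is a minimal sketch using the TensorFlow 1.x Python API that was current at the time: a single fully connected layer built from raw placeholder, variable, and matrix-multiply operations, rather than the one-line layer calls a high-level framework gives you.

    import numpy as np
    import tensorflow as tf

    # One fully connected layer, built from low-level ops.
    x = tf.placeholder(tf.float32, shape=[None, 4])   # a batch of inputs, 4 features each
    w = tf.Variable(tf.random_normal([4, 3]))         # weights for 3 output units
    b = tf.Variable(tf.zeros([3]))                    # biases
    y = tf.nn.sigmoid(tf.matmul(x, w) + b)            # layer output

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        out = sess.run(y, feed_dict={x: np.random.rand(2, 4)})
        print(out)                                    # two rows of three activations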

We’ve seen this one in a hack on Hackaday already in this hammer and beer bottle recognizing robot and even have an introduction to using TensorFlow.

Theano

Languages: Python

This is an open source library for doing efficient numerical computations involving multi-dimensional arrays. It’s from the University of Montreal, and runs on Windows, Linux and OS X. Theano has been around for a long time, with version 0.1 released in 2009.

Caffe

Languages: Command line, Python, and MATLAB

Caffe is developed by Berkeley AI Research and community contributors. You define your model in a plain text file, describe how to train it in a second plain text file called a solver, and pass both to the caffe command line tool, which then trains the network. You can then load the trained net from a Python or MATLAB program and use it to do something, image classification for example.
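
As a rough sketch of that last step, here’s what loading a trained net from Python can look like. The file names and the ‘data’/‘prob’ blob names are the conventions used in Caffe’s example models, not anything specific to your network, so treat them as placeholders.

    import numpy as np
    import caffe

    # Load a network trained earlier with the caffe command line tool.
    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

    # Stand-in for a real preprocessed image; the shape must match the deploy prototxt.
    net.blobs['data'].data[...] = np.random.rand(1, 3, 227, 227)

    out = net.forward()
    print(out['prob'].argmax())   # index of the highest-scoring class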

CNTK

Languages: Python, C++, C#

This is the Microsoft Cognitive Toolkit (CNTK) and runs on Windows and Linux. They’re currently working on a version to be used with Keras.

Keras

Languages: Python

Written in Python, Keras uses either TensorFlow or Theano underneath, making those frameworks easier to use. There are also plans to support CNTK. Work is underway to integrate Keras into TensorFlow, resulting in a separate TensorFlow-only version of Keras.
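
As a point of comparison with the TensorFlow sketch above, here is the same sort of small classifier in Keras. This is only a minimal sketch against the Keras API of the time, trained on random stand-in data just to show the calling convention.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(16, activation='relu', input_dim=4))   # hidden layer
    model.add(Dense(3, activation='softmax'))              # 3-class output
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

    x = np.random.rand(100, 4)                     # stand-in training data
    y = np.eye(3)[np.random.randint(0, 3, 100)]    # one-hot stand-in labels
    model.fit(x, y, epochs=5, batch_size=10, verbose=0)
    print(model.predict(x[:2]))                    # class probabilities for two samples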

TFLearn

Languages: Python

Like Keras, this is a high-level library built on top of TensorFlow.

FANN

Languages: Supports over 15 languages, no GPU support

This is a high-level open source library written in C. It’s limited to fully connected and sparsely connected neural networks. However, it’s been popular over the years, and has even been included in Linux distributions. It’s recently shown up here on Hackaday in a robot that learned to walk using reinforcement learning, a machine learning technique that often makes use of neural networks.

Torch

Languages: Lua

An open source library written in C. Interestingly, they say on the front page of their website that Torch is embeddable, with ports to iOS, Android and FPGA backends.

PyTorch

Languages: Python

PyTorch is relatively new; its website says it’s in early-release beta, but there seems to be a lot of interest in it. It runs on Linux and OS X and uses Torch underneath.

There are no doubt others that I’ve missed. If you have a particular favorite that’s not here then please let us know in the comments.

Which one should you use? Unless the programming language or OS is a deciding factor, another thing to keep in mind is your skill level. If you’re uncomfortable with math or don’t want to dig deeply into a neural network’s nuances, choose a high-level framework. In that case, stay away from TensorFlow, where you have to learn more about the API than with Keras, TFLearn, or the other high-level ones. Frameworks that emphasize their math functionality usually require you to do more work to create the network. Another factor is whether or not you’ll be doing basic research. A high-level framework may not give you enough access to the innards to start making crazy networks, perhaps with connections spanning multiple layers or within layers, and with data flowing in all directions.

Online Services

Are you looking to add something a neural network would offer to your hack, but don’t want to take the time to learn the intricacies of neural networks? For that there are services you can reach by connecting your hack to the internet.

We’ve seen countless examples making use of Amazon’s Alexa for voice recognition. Google also has its Cloud Machine Learning Services, which include vision and speech. Its vision service has shown up here using Raspberry Pis for candy sorting and reading human emotions. The Wekinator is aimed at artists and musicians; we’ve seen it used to train a neural network to respond to various gestures for turning things on and off around the house, as well as for making a virtual world’s tiniest violin. Not to be left out, Microsoft also has its Cognitive Services APIs, including vision, speech, language and others.

GPUs and TPUs

[Image: iterating through a neural network]

Training a neural network requires iterating through the neural network, forward and then backward, each time improving the network’s accuracy. Up to a point, the more iterations you can do, the better the final accuracy will be when you stop. The number of iterations could be in the hundreds or even thousands. With 1980s and 1990s computers, achieving enough iterations could take an unacceptable amount of time. According to the article Deep Learning in Neural Networks: An Overview, in 2004 a 20-times speedup was achieved with a GPU for a fully connected neural network, and in 2006 a 4-times speedup for a convolutional neural network. By 2010, training was as much as 50 times faster on a GPU than on a CPU. As a result, accuracies were much higher.

Nvidia Titan Xp graphics card. Image Credit: Nvidia

How do GPUs help? A big part of training a neural network involves doing matrix multiplication, something which is done much faster on a GPU than on a CPU. Nvidia, a leader in making graphics cards and GPUs, created an API called CUDA which is used by neural network software to make use of the GPU. We point this out because you’ll see the term CUDA a lot. With the spread of deep learning, Nvidia has added more APIs, including cuDNN (the CUDA Deep Neural Network library), a library of finely tuned neural network primitives, and another term you’ll see.
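
To make the matrix-multiplication point concrete, here is a one-layer forward pass sketched in plain NumPy; frameworks hand this same multiply off to the GPU through CUDA and cuDNN, which is where the speedups quoted above come from.

    import numpy as np

    batch = np.random.rand(64, 784)        # 64 samples of 784 inputs each (e.g. 28x28 images)
    weights = np.random.rand(784, 100)     # a layer of 100 neurons
    biases = np.random.rand(100)

    # The whole layer is one matrix multiply plus a nonlinearity (ReLU here).
    activations = np.maximum(0, batch.dot(weights) + biases)
    print(activations.shape)               # (64, 100)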

Nvidia also has its own single board computer, the Jetson TX2, designed to be the brains for self-driving cars, selfie-snapping drones, and so on. However, as our [Brian Benchoff] has pointed out, the price point is a little high for the average hacker.

Google has also been working on its own hardware acceleration in the form of its Tensor Processing Unit (TPU). You might have noticed the similarity to the name of Google’s framework above, TensorFlow. TensorFlow makes heavy use of tensors (think of single and multi-dimensional arrays in software). According to Google’s paper on the TPU it’s designed for the inference phase of neural networks. Inference refers not to training neural networks but to using the neural network after it’s been trained. We haven’t seen it used by any frameworks yet, but it’s something to keep in mind.

Using Other People’s Hardware

Do you have a neural network that’ll take a long time to train but don’t have a supported GPU, or don’t want to tie up your own resources? In that case there’s hardware you can use on other machines accessible over the internet. One such is FloydHub which, for an individual, costs only pennies per hour with no monthly payment. Another is Amazon EC2.

Datasets

[Image: training a neural network with labeled data]

We said that one of the breakthroughs in neural networks was the availability of training data containing large numbers of samples, in the tens of thousands. Training a neural network using a supervised training algorithm involves presenting the data at the network’s inputs while also telling it what the expected output should be, so the data has to be labeled. If you give an image of a horse to the network’s inputs and its outputs say it looks like a cheetah, it needs to know that the error is large and more training is needed. The expected output is called a label, and such data is ‘labeled data’.

Many such datasets are available online for training purposes. MNIST is one such for handwritten character recognition. ImageNet and CIFAR are two different datasets of labeled images. Many more are listed on this Wikipedia page. Many of the frameworks listed above have tutorials that include the necessary datasets.
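
Grabbing one of these datasets can be a one-liner. For example, Keras ships a loader for MNIST that returns the images together with their labels, ready for supervised training:

    from keras.datasets import mnist

    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    print(x_train.shape, y_train.shape)   # (60000, 28, 28) images and (60000,) labels
    print(y_train[0])                     # the digit the first training image shows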

That’s not to say you absolutely need a large dataset to get respectable accuracy. The walking robot we mentioned earlier, which used the FANN framework, used the servo motor positions as its training data.

Other Resources

Unlike in the 80s and 90s, you’re no longer limited to hardcopy books about neural networks (though you can still buy those); there are numerous books online. Two online books I’ve enjoyed are Deep Learning by the MIT Press and Neural Networks and Deep Learning. The frameworks listed above all have tutorials to help you get started. And then there are countless other websites and YouTube videos on any topic you search for. I find YouTube videos of recorded lectures and conference talks very useful.

The Future

[Image: Raspberry Pi 7 with GPU]

Doubtless the future will see more frameworks coming along.

We’ve long seen specialized neural chips and boards on the market, but none has ever found a big market, even back in the 90s. However, those weren’t designed specifically to serve the real growth area: the neural network software that everyone’s working on. GPUs do serve that market. As neural networks with millions of connections for image and voice processing, language, and so on make their way into smaller and smaller consumer devices, the need for GPUs or processors tailored to that software will hopefully result in something that can become a new component on a Raspberry Pi or Arduino board. Though there is the possibility that processing will remain an online service instead.

EDIT: It turns out there is a GPU on the Raspberry Pi — see the comments below. That doesn’t mean all the above frameworks will make use of it though. For example, TensorFlow supports only Nvidia CUDA cards. But you can still use the GPU for your own custom neural network code. Various links are in the comments for that too.

There is already competition for GPUs from ASICs like the TPU and it’s possible we’ll see more of those, possibly ousting GPUs from neural networks altogether.

As for our new computer overlords: neural networks as a part of our daily life are here to stay this time, but the hype around artificial general intelligence will likely quiet down until someone makes significant breakthroughs, only to explode onto the scene once again, and for real this time.

In the meantime, which neural network framework have you used and why? Or did you write your own? Are there any tools missing that you’d like to see? Let us know in the comments below.


Filed under: Featured, Interest, slider, software hacks

White-hat Botnet Infects, Then Secures IoT Devices

Monday, 04/24/2017 - 18:00

[Symantec] reports that Hajime seems to be a white-hat worm that spreads over Telnet in order to secure IoT devices instead of actually doing anything malicious.

[Brian Benchoff] wrote a great article about the Hajime worm when the story first broke back in October of last year. At the time, it looked like the beginnings of a malicious IoT botnet out to cause some DDoS trouble. In a crazy turn of events, it now seems that the worm is actually securing devices affected by another major IoT botnet, dubbed Mirai, which has been launching DDoS attacks. More recently, a new Mirai variant has been launching application-layer attacks since its source code was uploaded to a GitHub account and adapted.

Hajime is a much more complex botnet than Mirai: it is controlled peer-to-peer, propagating commands through infected devices, whilst the latter uses hard-coded addresses for command and control. Hajime can also cloak itself better, hiding itself from the list of running processes and hiding its files on the device.

The author can open a shell on any infected machine in the network at any time, and the code is modular, so new capabilities can be added on the fly. It is apparent from the code that a fair amount of development time went into designing this worm.

So where is this all going? So far this is beginning to look like a cyber battle of Good vs Evil. Or it’s a turf war between rival cyber-mafias. Only time will tell.


Filed under: news, security hacks

Laser Surgery: Expanding the Bed of a Cheap Chinese Laser Cutter

Monday, 04/24/2017 - 15:00

Don’t you just hate it when you spend less than $400 on a 40-watt laser cutter and it turns out to have a work area the size of a sheet of copy paper? [Kostas Filosofou] sure did, but rather than stick with that limited work envelope, he modified his cheap K40 laser cutter so it has almost five times the original space.

The K40 doesn’t make any pretenses — it’s a cheap laser cutter and engraver from China. But with new units going for $344 on eBay now, it’s almost a no-brainer. Even with its limitations, you’re still getting a 40-watt CO2 laser and decent motion control hardware to play with. [Kostas] began the embiggening by removing the high-voltage power supply from its original space-hogging home to the right of the work area. With that living in a new outboard enclosure, a new X-Y gantry of extruded aluminum rails and 3D-printed parts was built, and a better exhaust fan was installed. Custom mirror assemblies were turned, better fans were added to the radiator, and oh yeah — he added a Z-axis to the bed too.

We’re sure [Kostas] ran the tab up a little on this build, but when you’re spending so little to start with, it’s easy to get carried away. Speaking of which, if you feel the need for an even bigger cutter, an enormous 100-watt unit might be more your style.

Thanks to stalwart tipster [George Graves] for the heads up on this one.


Filed under: laser hacks, tool hacks

Arbitrary Code Execution is in Another Castle!

Monday, 04/24/2017 - 12:00

When one buys a computer, it should be expected that the owner can run any code on it that they want. Often this isn’t the case, though, as most modern devices are sold with locked bootloaders or worse. Older technology is a little bit easier to handle, however, but arbitrary code execution on something like an original Nintendo still involves quite a lot of legwork, as [Retro Game Mechanics Explained] shows with the inner workings of Super Mario Brothers 3.

While this hack doesn’t permanently modify the Nintendo itself, it does allow for arbitrary code execution within the game, which is used mostly by speedrunners to get to the end credits scene as fast as possible. To do this, values are written to memory by carefully manipulating on-screen objects. Once the correct values are entered, a glitch in the game involving a pipe is exploited to execute the manipulated memory as an instruction. The instruction planted is most often used to load the Princess’s chamber and complete the game, with the current record hovering around the three-minute mark.

If you feel like you’ve seen something like this before, you are likely thinking of the Super Mario World exploit for the SNES that allows for the same style of arbitrary code execution. The Mario 3 hack, however, is simpler to execute. It’s also worth checking out the video below, because [Retro Game Mechanics Explained] goes into great depth about which values are written to memory, how they are executed as an instruction, and all of the other inner workings of the game that allow for an exploit of this level.


Filed under: nintendo hacks

ESP32’s Dev Framework Reaches 2.0

Monday, 04/24/2017 - 09:00

We’ve been watching the development of the ESP32 chip for the last year, but honestly we’ve been a little bit cautious to throw all of our friendly ESP8266s away just yet. Earlier this month, Espressif released version 2.0 of their IoT Development Framework (ESP-IDF), and if you haven’t been following along, you’ve missed a lot.

We last took a serious look at the IDF when the chips were brand-new, and the framework was still taking its first baby steps. There was no support for such niceties as I2C at the time, but you could get both cores up and running and the thing connected to the network. We wanted to test out the power-save modes, but that wasn’t implemented yet either. In short, we were watching the construction of a firmware skyscraper from day one, and only the foundation had been poured.

But what a difference eight months make! Look through the GitHub changelog for the release, and it’s a totally new ballgame. Not only are there drivers for I2C, I2S, SPI, the DAC and ADCs, etc., but there are working examples and documentation for all of the above. Naturally, there are a ton of bugfixes as well, especially in the complex WiFi and Bluetooth Low Energy stacks. There’s still work left to do, naturally, but Espressif seems to think that the framework is now mature enough that they’ve opened up their security bug bounty program on the chip. Time to get hacking!

If you want quickstart instructions, the files you’re looking for are in the documentation folder at GitHub. If you’re trying to manage a previous install of the ESP-IDF, you’ll note that V2.0 requires a newer version of the GCC compiler, so there’s a little work to do if you want to make it coexist with a V1.0 install. But we think you can handle it.


Filed under: Microcontrollers, news, slider

Hackaday Links: April 23, 2017

Monday, 04/24/2017 - 06:00

‘Member StarCraft? Ooooh, I ‘member StarCraft. The original game and the Brood War expansion are now free. A new patch fixes most of the problems of getting a 20-year-old game working and vastly improves playing over LAN (‘member when you could play video games over a LAN?) And you thought you were going to have free time this week.

About a year ago, [Mark Chepurny] built a dust boot for his Shapeoko CNC router. The SuckIt (not the best possible name, by the way) is an easy, simple way to add dust collection to an X-Carve or Shapeoko 2. The folks at Inventables reached out to [Mark] and made a few improvements; the result is the renamed X-Carve Dust Control System, a proper vacuum attachment for the X-Carve with grounding and a neat brush shoe.

I don’t know if this is a joke or not. It’s certainly possible, but I seriously doubt anyone would have the patience to turn PowerPoint into a Turing Machine. That’s what [Tom Wildenhain] did for a lightning talk at SIGBOVIK 2017 at CMU. There’s a paper (PDF), and the actual PowerPoint / Turing Machine file is available.

System76 builds computers. Their focus is on computers that run Linux well, and they’ve garnered a following in the Open Source world. System76 is moving manufacturing in-house. Previously, they outsourced their design and hardware work to outside companies. They’re going to work on desktops first (laptops are much harder and will come later), but with any luck, we’ll see a good, serviceable, Open laptop in a few years’ time.

Remember last week when a company tried to trademark the word ‘makerspace’? That company quickly came to their senses after some feedback from the community. That’s not all, because they also had a trademark application for the word ‘FabLab’. No worries, because this was also sorted out in short order.


Filed under: Hackaday Columns, Hackaday links

Hackaday Prize Entry: Memes

Monday, 04/24/2017 - 03:00

Snap, Inc., the company behind Snapchat, is branding itself as a hardware company. What hardware does Snap make? Spectacles, or a camera attached to a pair of sunglasses. Snap, Inc. has a market value of around $30 Billion USD.

For his Hackaday Prize entry, [William Glover] is building a device that’s easily worth $100 Billion. It’s called SnappCat, and it’s a machine learning, AI, augmented reality, buzzword-laden camera that adds memes to pictures of cats. Better get in on the Series A now because this is 

This Hacker Fit An Entire RetroPie In An Altoids Tin

Monday, 04/24/2017 - 00:00

A few months ago, [wermy] built the mintyPi, a Raspberry Pi-based gaming console that fits inside an Altoids tin. It’s amazing — there’s a composite LCD, an audio DAC, and a chopped up Nintendo controller all connected to a Raspberry Pi for vintage gaming goodness on the road. Now, there’s a new mintyPi. The mintyPi 2.0 vastly improves over the earlier generation of this groundbreaking mint-based gaming console with a better screen, better buttons, customized 3D printed bezels, and better audio. Truly, we live in a Golden Age.

Version two of mintyPi uses 3D printed parts and includes a real hinge to keep the display propped up when the Altoids tin is open. Instead of a DAC-based audio solution, [wermy] is using a USB sound card for clearer, crisper sound. This version also uses the new, wireless version of the Raspberry Pi Zero. The Raspberry Pi Zero W allows this Altoids tin to connect to the Internet or, alternatively, gives the user the ability to dump ROMs on this thing without having to connect it to a computer.

For the software, this retro Altoids video game machine is running RetroPie, a very popular way to get retro video games running on low-power Linux machines. Everything is in there, from the NES to the Amstrad to the Sega Master System.

Right now, there aren’t a whole lot of details on how [wermy] created the mintyPi 2.0, but he promises a guide soon. Until then, we’ll just have to drool over the video embedded below.


Filed under: Raspberry Pi

The Raspberry Pi As An IR To WiFi Bridge

Sunday, 04/23/2017 - 21:00

[Jason] has a Sonos home sound system, with a bunch of speakers connected via WiFi. [Jason] also has a universal remote designed and manufactured in a universe where WiFi doesn’t exist. The Sonos can not be controlled via infrared. There’s an obvious problem here, but luckily tiny Linux computers with WiFi cost $10, and IR receivers cost $2. The result is an IR to WiFi bridge to control all those ‘smart’ home audio solutions.

The only things [Jason] needed to control his Sonos from a universal remote are an IR receiver and a Raspberry Pi Zero W. The circuit is simple – just connect the power and ground of the IR receiver to the Pi, and plug the third pin of the receiver into a GPIO pin. The new, fancy official Raspberry Pi Zero enclosure is perfect for this build, allowing the receiver’s little IR-transparent epoxy package to poke out of the hole designed for the Pi camera.
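
LIRC does the actual decoding (more on that below), but before setting it up it’s worth confirming the receiver toggles the pin at all. Here’s a quick sanity-check sketch; the BCM GPIO 18 pin number is an assumption for illustration, not necessarily the pin [Jason] used.

    import time
    import RPi.GPIO as GPIO

    IR_PIN = 18                    # assumption: whichever GPIO the receiver's data pin is on
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(IR_PIN, GPIO.IN)

    try:
        while True:
            # Most 38 kHz IR receivers idle high and pull the pin low during a burst.
            if GPIO.input(IR_PIN) == GPIO.LOW:
                print("IR activity")
                time.sleep(0.2)    # crude debounce so one button press prints only once or twice
    finally:
        GPIO.cleanup()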

For the software, [Jason] turned to Node JS and LIRC, a piece of software that decodes IR signals. With the GPIO pin defined, [Jason] set up the driver and used the Sonos HTTP API to send commands to his audio unit. There’s a lot of futzing about with text files for this build, but the results speak for themselves: [Jason] can now use a universal remote with everything in his home stereo.


Filed under: home entertainment hacks, Raspberry Pi

Gawkerbot is Watching You

Sunday, 04/23/2017 - 18:00

While sick with the flu a few months ago, [CroMagnon] had a vision. A face with eyes that would follow you – no matter where you walked in the room. He brought this vision to life in the form of Gawkerbot. This is no static piece of art. Gawkerbot’s eyes slowly follow you as you walk through its field of vision. Once the robot has fixed its gaze upon you, the eyes glow blue. It makes one wonder if this is an art piece, or if the rest of the robot is about to pop through the wall and attack.

Gawkerbot’s sensing system is rather simple. A PIR sensor detects motion in the room. If any motion is detected, the two ultrasonic sensors which make up the robot’s pupils start taking data. Code running on an ATmega328 determines whether a person is detected on the left or the right, and moves the eyes appropriately.
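
The real decision-making runs in C on the ATmega328, but the left-or-right logic is simple enough to sketch in a few lines of Python: compare the two pupil sensors and steer the eyes toward whichever reports the closer reading. The threshold value here is an illustrative assumption, not [CroMagnon]’s.

    def eye_direction(left_cm, right_cm, max_range_cm=200):
        """Return -1 to look left, +1 to look right, 0 if nobody is in range."""
        if min(left_cm, right_cm) > max_range_cm:
            return 0
        return -1 if left_cm < right_cm else 1

    # A person standing nearer the left sensor:
    print(eye_direction(80, 150))   # -1, so the eyes swing left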

[CroMagnon] used the optics sled from an old CD-ROM drive to move Gawkerbot’s eyes. While the motor is small, the worm drive has plenty of power to move the 3D-printed eyes and linkages. Gawkerbot’s main face is a 3D-printed version of a firefighter’s smoke helmet.

The ultrasonic sensors work, but it took quite a bit of software to tame the jitter in the noisy data stream. [CroMagnon] is thinking of using PIR sensors on Gawkerbot 2.0. Ultrasonic transducers aren’t just for sensing, either. Given enough power, you can solder with them. Ultrasonics even work for wireless communications.

Check out the video after the break to see Gawkerbot in action.


Filed under: 3d Printer hacks

A Cool Mist that Dries Your Clothes

Sunday, 04/23/2017 - 15:00

This one is both wild enough to be confused with a conspiracy theory and common-sense enough to be the big solution staring us in the face which nobody realized. Until now. Oak Ridge National Laboratory and General Electric (GE), working on a grant from the US Department of Energy (DOE), have been playing around with new clothes dryer technology since 2014 and have come up with something new and exciting: clothes dryers that use ultrasonic transducers to remove moisture from garments instead of using heat.

If you’ve ever seen a cool mist humidifier you’ll know how this works. A piezo element generates ultrasonic waves that atomize water and humidify the air. This is exactly the same except the water is stored in clothing, rather than a reservoir. Once it’s atomized it can be removed with traditional air movement.

This is a totally obvious application of the simple and inexpensive technology — when the garment is lying flat on a bed of transducers. It can be implemented in a press drying system where a garment is laid flat on one bed of transducers and another bed hinges down from above. Poof, your shirt is dry in a few seconds.

But individual households don’t have these kinds of dryers. They have what are called drum dryers that spin the clothes. Reading closely, this piece of the puzzle is still to come:

They play [sic] to scale-up the technology to press drying and eventually a clothes dryer drum in the next five months.

We look at this as having a similar technological hurdle to wireless electricity. There must be an inverse-square law on the ability of the ultrasonic waves to atomize water as the water moves further away from the transducers. If that’s the case, transducers on the circumference of a drum would be inefficient at drying the clothing toward the center. This slide deck hints that that problem is being addressed; it talks about only running the transducers when the fabric is physically coupled with the elements. It’s an interesting application and we hope that it could work in conjunction with traditional drying methods to boost energy savings, even if this doesn’t pan out as a total replacement.

With a vast population, cost adds up fast. There are roughly 125 M households in the United States and the overwhelming majority of them use clothes dryers (while many other parts of the world have a higher percentage who hang-dry their clothing). The DOE estimates $9 billion a year is spent on drying clothes in the US. Reducing that number by even 1/10th of 1% will pay off more than tenfold the $880,000 research budget that went into this. Of course, you have to outfit those households with new equipment which will take at least 8-12 years through natural attrition, even if ultrasonics hit the market as soon as possible.

There is a lot of room for new ideas on saving energy and resources while washing and drying clothes. Working on this challenge would make a great Hackaday Prize entry. As it so happens, just last year we saw a method that leveraged arid desert air as a heat source for drying.

[via reddit via Yahoo Finance]


Filed under: green hacks, home hacks

Retrofitting An Amstrad CPC6128 With A Floppy Emulator

Sunday, 04/23/2017 - 12:00

In the home computer boom of 1980s Britain, you could describe Amstrad as the third-placed home-grown player after Sinclair and Acorn. If you were a computer enthusiast kid rather than a gamer kid, you wanted Acorn’s BBC Micro, your parents bought you Sinclair’s ZX Spectrum because it was cheaper, and you thought the Amstrads were cool because they came with a better monitor than your family’s cast-off 1970s TV.

Amstrad were not a computer company headed by a technical wizard; instead they were a consumer electronics company whose founder [Alan Sugar] had a keen nose for the preferences of the consumer. Thus the Amstrad machines were different from some of their competitors: they were more polished, more appliances than experimental tools. Mass storage devices such as tape decks and floppy drives were built in, every Amstrad came with its own dedicated monitor, and the keyboards were of the decent quality you’d see on a “proper” computer.

The high-end Amstrad model was the CPC6128. It came with a 3″ floppy drive, and of most interest, it could run the CP/M operating system. If your parents bought you an Amstrad CPC as a 1980s teen, it wouldn’t have been this one, so they are considerably less common than their 64k brethren with the cassette deck. One has found its way into [Drygol]’s hands though, and because the vintage 3″ floppies are unobtainable nowadays he’s fitted a floppy emulator board that stores data on an SD card.

In a sense, in that this is simply the fitting of an off-the-shelf board to a computer, it’s Not A Hack. But that misses the point. This is an unusual home computer from the 8-bit era, and his write-up is as much a teardown as it is a how-to. We don’t often get to see inside a 6128.

Fitting the board required the fabrication of a cable, with some very neat soldering work. The board has an LCD display, which is mounted in the floppy opening with a 3D printed bezel. The result is a very usable retro computer, without too much in the way of wanton remodeling.

This is probably the first real Amstrad 6128 we’ve shown you, but that hasn’t stopped enthusiasts making a clone with original chips, and another on an FPGA.

 


Filed under: computer hacks

Ultrasonic Raspberry Pi Piano

Sunday, 04/23/2017 - 09:00

Cheap stuff gets our creative juices flowing. Case in point? [Andy Grove] built an eight-sensor HC-SR04 breakout board, because the ultrasonic distance sensors in question are so affordable that a hacker can hardly avoid ordering them by the dozen. He originally built it for robotics, but then it’s just a few lines of code to turn it into a gesture-controllable musical instrument. Check out the video, embedded below, for an overview of the features.

His Octasonic breakout board is just an AVR in disguise — it reads from eight ultrasonic sensors and delivers a single SPI result to whatever other controller is serving as the brains. In the “piano” demo, that’s a Raspberry Pi, so he needed the usual 5 V to 3.3 V level shifting in between.
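
On the Pi side, reading the board amounts to a short SPI transaction. This sketch uses the common spidev module; the one-command-byte, one-reply-byte framing is an assumption for illustration, since the real protocol is defined by [Andy Grove]’s firmware and his Rust code on GitHub.

    import spidev

    spi = spidev.SpiDev()
    spi.open(0, 0)                    # SPI bus 0, chip select 0
    spi.max_speed_hz = 500000

    def read_distance(sensor_index):
        # Send a byte naming the sensor, clock out a dummy byte to read the reply.
        reply = spi.xfer2([sensor_index, 0x00])
        return reply[1]               # distance reading from that sensor

    print([read_distance(i) for i in range(8)])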

The rest is code on the Pi that enables gestures to play notes, change musical instruments, and even shut the Pi down. The Pi code is written in Rust, and up on GitHub. An Instructable has more detail on the hookups.

All in all, building a “piano” out of robot parts is surely a case of having a hammer and every problem looking like a nail, but we like some of the nail-sculptures that arise that way. This isn’t the first time we’ve seen an eight-sensor ultrasonic setup, either. Is 2017 going to be the year of ultrasonic sensor projects?


Filed under: musical hacks, Raspberry Pi

How Many Watts Are You Using?

Sunday, 04/23/2017 - 06:00

One of the best smart home hacks is implementing an energy monitor of some kind. It’s easy enough to say that you’re trying to save energy, but without the cold hard data, it’s just talk. Plus, it’s easy and a great way to build up something DIY that the whole family can use.

[Bogdan] built up a simple whole-apartment power monitor from scratch over the weekend, and he’s been nice enough to walk us through the whole procedure, starting with picking up a split-core CT sensor and ending up with a finished project.

The brains of his project are an ESP8266 module, which means that he needed to adapt the CT sensor to put out a voltage that lies within the chip’s ADC range of 0 V to 3.3 V. If you’re undertaking an energy monitor project, it’s as easy as picking the right burden resistor value and then shifting the ground-centered voltage up by 1.6 V or so. We say it’s easy, but it’s nice to have a worked example and some scope shots. The microcontroller reads the ADC frequently, does a little math, and you’re done.
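
As a worked example of that burden-resistor math, here’s the calculation with assumed numbers: a typical 2000:1 split-core CT and a 30 A maximum load, neither of which comes from [Bogdan]’s write-up.

    import math

    max_primary_rms = 30.0            # assumed maximum line current, in amps RMS
    ct_turns_ratio = 2000.0           # assumed CT ratio (e.g. a 100 A : 50 mA part)
    adc_range = 3.3                   # the ESP8266 ADC range used in the write-up
    midpoint = adc_range / 2          # the ~1.6 V offset the signal is shifted up by

    secondary_peak = max_primary_rms * math.sqrt(2) / ct_turns_ratio   # peak secondary current
    burden = midpoint / secondary_peak                                 # keep the swing inside the ADC range
    print("burden resistor of about %.0f ohms" % burden)               # roughly 78 ohms here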

The rest of the code was borrowed from here or there. EmonLib takes care of the math, ArduinoOTA allows him to reflash the firmware over the air, and Blynk takes care of making a nice Android app for visualization. In the end, a nice dot-matrix LED display lets [Bogdan] obsess about every last Watt in his living room. Adding a second ADC channel to the ESP8266 so that he can get a bit more accuracy out by measuring the instantaneous voltage is probably a project for next weekend.


Filed under: home hacks

Hackaday Prize Entry: A Complete Suite Of Biomedical Sensors

Sunday, 04/23/2017 - 03:00

The human body has a lot to tell us if we only have the instruments to listen. Unfortunately, most of the diagnostic gear used by practitioners is pricey stuff that’s out of range if you just want to take a casual look under the hood. For that task, this full-featured biomedical sensor suite might come in handy.

More of an enabling platform than a complete project, [Orlando Hoilett]’s shield design incorporates a lot of the sensors we’ve seen before. The two main modalities are photoplethysmography, which uses the MAX30101 to sense changes in blood volume and oxygen saturation by differential absorption and reflection of light, and biopotential measurements using an instrumentation amplifier built around an AD8227 to provide all the “electro-whatever-grams” you could need: electrocardiogram, electromyogram, and even an electrooculogram to record eye movements. [Orlando] has even thrown on temperature and light sensors for environmental monitoring.

[Orlando] is quick to point out that this is an educational project and not a medical instrument, and that it should only ever be used completely untethered from mains — battery power and Bluetooth only, please. Want to know why? Check out the shocking truth about transformerless power supplies.


Thanks to [fustini] for the tip.


Filed under: Medical hacks, The Hackaday Prize

X-Ray Imaging Camera Lens Persuaded to Join Micro Four Thirds Camera

Sunday, 04/23/2017 - 00:01

Anyone who is into photography knows that the lenses are the most expensive part in the bag. The larger the lens’s aperture (the smaller its f-stop number), the more light comes in, which is better for dimly lit scenes. Consequently, the price of the larger glass can burn a hole in one’s pocket. [Anthony Kouttron] decided that he could use a Rodenstock TV-Heligon lens he found online and adapt it for his Micro Four Thirds camera.

The lens came attached to a Fischer Imaging TV camera which was supposedly part of the Fluorotron line of systems used for X-ray imaging. We find [Anthony’s] exploration of the equipment, and discovery of previous hacks by unknown owners, to be entertaining. Even before he begins machining the parts for his own purposes, this is an epic teardown he’s published.

Since the lens was originally mounted on a brass part, [Anthony Kouttron] knew that it would be rather easy to machine the custom part to fit standardized lens adapters. He describes in detail the process for cleaning out the original mount by sanding, machining and threading it. Along the way you’ll enjoy his tips on dealing with a part that, instead of being a perfect circle on the outside, had a formidable mounting tab (which he no longer needed) protruding from one side.

The video after the break shows the result of shooting with a very shallow depth of field. For those who already have a manual lens but lack the autofocus motor, a conversion hack works like a charm as well.

Edit: This article originally stated the video demonstrates a large depth of field. This is incorrect. The video demonstrates a very shallow depth of field. The error has been corrected.


Filed under: digital cameras hacks, repair hacks

Little Laser Light Show is Cleverly Packaged, Cheap to Build

Saturday, 04/22/2017 - 21:00

We’re suckers for any project that’s nicely packaged, but an added bonus is when most of the components can be sourced cheaply and locally. Such is the case for this little laser light show, housed in electrical boxes from the local home center and built with stuff you probably have in your junk bin.

When we first came across [replayreb]’s write-up and saw that he used hard drives in its construction, we assumed he used head galvanometers to drive the mirrors. As it turns out, he used that approach in an earlier project, but this time around, the hard drive only donated its platters for use as low-mass, first-surface mirrors. And rather than driving the mirrors with galvos, he chose plain old brushed DC motors. These have the significant advantage of being cheap and a perfect fit for 3/4″ EMT set-screw connectors, designed to connect thin-wall conduit, also known as electrical metallic tubing, to electrical boxes and panels. The motors are mounted to the back and side of the box so their axes are 90° from each other, and the mirrors are constrained by small cable ties and set at 45°. The motors are driven directly by the left and right channels of a small audio amp, wiggling enough to create a decent light show from the laser module.

We especially like the fact that these boxes are cheap enough that you can build three with different color lasers. In that case, an obvious next step would be bandpass filters to split the signal into bass, midrange, and treble for that retro-modern light organ effect. Or maybe figuring out what audio signals you’d need to make this box into a laser sky display would be a good idea too.


Filed under: laser hacks

The Internet of Rice Cookers

Saturday, 04/22/2017 - 18:00

You’d be forgiven for thinking this was going to be an anti-IoT rant: who the heck needs an IoT rice cooker anyway? [Microentropie], that’s who. His rice cooker, like many of the cheapo models, terminates heating by detecting a temperature around 104° C, when all the water has boiled off. But that means the bottom of the rice is already dried out and starting to get crispy. (We love the crust! But this hack is not for us. This hack is for [Microentropie].)

So [Microentropie] added some relays, a temperature sensor, and an ESP8266 to his rice cooker, creating the Rice Cooker 2.0, or something. He tried a few complicated schemes but was unwilling to modify any of the essential safety features of the cooker. In the end [Microentropie] went with a simple time-controlled cooking cycle, combined with a keep-warm mode and of course, notification of all of this through WiFi.

There’s a lot of code making this simple device work. For instance, [Microentropie] often forgets to press the safety reset button, so the ESP polls for it, and the web interface has a big red field to notify him of this. [Microentropie] added a password-protected login to the rice cooker as well. Still, it probably shouldn’t be put on the big wide Internet. The cooker also randomizes URLs for firmware updates, presumably to prevent guests in his house from flashing new firmware to his rice cooker. There are even custom time and date classes, because you know you don’t want your rice cooker using inferior code infrastructure.

In short, this is an exercise in scratching a ton of personal itches, and we applaud that. Next up is replacing the relays with SSRs so that the power can be controlled with more finesse, adding a water pump for further automation, and onboard data logging. Overkill, you say? What part of “WiFi-enabled rice cooker” did you not understand?


Filed under: cooking hacks, home hacks

[Hari] Prints an Awesome Spider Robot

Saturday, 04/22/2017 - 15:00

Although we have strong suspicions that the model’s designer failed entomology, this spider robot is very cool. [Hari Wiguna] made one, and is justifiably thrilled with the results. (Watch his summary on YouTube embedded below.)

Thanks to [Regis Hsu]’s nice design, all [Hari] had to do was order a hexapod’s dozen 9g servos for around $20, print out the parts, attach an Arduino clone, and he was done. We really like the cutouts in the printed parts that nicely fit the servo horns. [Hari] says the calibration procedure is a snap; you run a sketch that sets all the servos to a known position and then tighten the legs in place. Very slick.

The parts should print without support on basically any printer. [Hari]’s is kinda janky and exhibits all sorts of layer-to-layer irregularities (sorry, man!) but the robot works perfectly. Which is not to say that [Hari] doesn’t have assembly skills — check out the world’s smallest (?) RGB LED cube if you think this guy can’t solder. Of course, you can entirely sidestep the 3D-printed parts and just fix a bunch of servos together and call it a robot. It’s hard to make building a four-legger any easier than these two projects. What are you waiting for?


Filed under: robots hacks

An 8-Bit Transport Triggered Architecture CPU in TTL

Saturday, 04/22/2017 - 12:00

When we are introduced to the internals of a microprocessor, it is most likely that we will be shown something like one of the first generation of 8-bit CPUs from the 1970s. There will be the familiar group of registers and counters, an arithmetic and logic unit (ALU), and an instruction decoder with associated control logic. A complex instruction set causes the decoder to marshal registers and ALU to perform all the various functions in the right order. CPUs may have moved on in many ways since the 1970s, but the block diagram of an 8080 or similar still provides a basic grounding for the beginner.

So when we tell you about another home-made CPU using TTL logic chips, you might expect it to follow this well-worn path. Fortunately, though, the hardware hacking community is always capable of springing surprises on us, and [Szoftveres] has done just that with his design. It’s a one-instruction-set machine following a transport triggered architecture, and that means it deviates sharply from the conventional architecture described above. Each instruction is a move between the different physical functions of the processor, and computation is achieved by the physical functions working on the data as it is moved into them and presenting the result on their outputs, ready to be moved elsewhere. The result is a computer that is in its own way beautifully simple, though at the expense of some inflexibility and the lack of some hardware functions we take for granted in more conventional processors.

This machine has been built on a piece of stripboard, and has an accompanying board with a display, keypad, and modem. There is a small board based upon an ATmega8 microcontroller which performs the function of fast program loading, and can be removed once the code is loaded. Software can be written in a C-like language and compiled using the compiler in his GitHub repository, and he has produced a YouTube video of the machine in operation. This project is well worth reading through in depth, for its introduction to this slightly unusual architecture.

We have brought you numerous 74 TTL logic CPUs over the years, but surprisingly this one is not the first single-instruction design.


Filed under: computer hacks