Hackaday

Fresh hacks every day

Motion Detecting Camera Recognizes Humans Using The Cloud

Tuesday, 02/07/2017 - 07:00

[Mark West] and his wife had a problem: they’d been getting unwanted guests in their garden. Mark’s solution was to come up with a motion activated security camera system that emails him when a human moves in the garden. That’s right, only a human. And to make things more interesting from a technical standpoint, he does much of the processing in the cloud. He sends the cloud a photo with something moving in it, and is sent an email only if it has a human in it.

[Mark]’s first iteration, described very well on his website, involved putting together off-the-shelf components, including a Raspberry Pi Zero, the Pi NoIR camera and the ZeroView Camera Mount that let him easily mount it all on the inside of his window looking out to the garden. He used Motion to examine the camera’s images and look for any frames with movement. His code sent him an email with a photo every time motion was detected. The problem was that on some days he got email alerts with as many as 50 false positives: moving shadows, the neighbor’s cat, even rain on the window.

That led him to his next iteration: checking the photos for humans. For that he chose to pass them to the Amazon Web Services (AWS) Rekognition online tool.

But that left a decision. He could either send the images with motion in them to Rekognition, get back the result, and then send an email to himself about those that contained humans, or he could just send the images off to the cloud, let the cloud talk to Rekognition, and have the cloud send him the email. He chose to let the cloud do it so that the Pi Zero and the cloud components would be more decoupled, making changes to either one easier.
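Conceptually, the cloud side boils down to one question per photo: does Rekognition’s label list include a person? Here’s a minimal Python sketch of that check. The `contains_person` helper and the 80% confidence threshold are our own inventions, not [Mark]’s code; the commented-out call shows the shape of the real boto3 `detect_labels` API.

```python
# Check a Rekognition detect_labels response for a human. The helper name
# and confidence threshold are illustrative choices, not [Mark]'s actual code.
def contains_person(response, min_confidence=80.0):
    """Return True if a Rekognition label response includes a Person."""
    return any(
        label["Name"] == "Person" and label["Confidence"] >= min_confidence
        for label in response.get("Labels", [])
    )

# The actual cloud call would look something like this (needs AWS credentials):
# import boto3
# client = boto3.client("rekognition")
# with open("motion.jpg", "rb") as f:
#     response = client.detect_labels(Image={"Bytes": f.read()})
# if contains_person(response):
#     send_alert_email("motion.jpg")  # hypothetical email helper
```

Filtering on the label’s confidence score is what keeps shadows and cats from generating alerts: Rekognition returns many labels per image, each with its own confidence.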

There isn’t room here to go through all the details of how he did it on the cloud and so we’ll leave that to [Mark’s] detailed write-up. But suffice it to say that it makes for a very interesting read. Most interesting is that it’s possible. It means that hacks with processor-constrained microcontrollers can not only do sophisticated AI tasks using online services but can also offload some of the intermediate tasks.

But what if you want to detect more than just motion? [Armagan C.] also uses a Raspberry Pi camera but adds thermal, LPG and CO2 sensors. Or if it’s just the cats that are the problem, then how about a PIR sensor for detection and an automatic garden hose for deterrence?


Filed under: security hacks

Ham Radio Trips Circuit Breakers

Tuesday, 02/07/2017 - 04:00

Arc-fault circuit breakers are a boon for household electrical safety. The garden-variety home electrical fire is usually started by the heat coming from a faulty wire arcing over. But as any radio enthusiast knows, sparks also give off broadband radio noise. Arc-fault circuit interrupters (AFCI) are special circuit breakers that listen for this noise in the power line and trip when they hear it. The problem is that they can be so sensitive that they cut out needlessly. Check out the amusing video below the break.
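The principle is easy to sketch in software, even though a real AFCI does its listening with dedicated detection hardware: a normal load’s current is concentrated at the mains frequency, while an arc adds broadband noise across the spectrum. Here’s a rough Python illustration; the sample rate, cutoff frequency, and trip threshold are all invented for the demo.

```python
# Toy model of arc detection: trip when too much of the line current's
# spectral power sits above the mains frequency. Numbers are made up.
import numpy as np

FS = 4096  # hypothetical line-current sampling rate, samples/second

def high_freq_fraction(signal, fs=FS, cutoff_hz=300.0):
    """Fraction of total spectral power above cutoff_hz."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return power[freqs > cutoff_hz].sum() / power.sum()

def trips_breaker(signal, threshold=0.1):
    return high_freq_fraction(signal) > threshold

rng = np.random.default_rng(0)
t = np.linspace(0, 1, FS, endpoint=False)
clean_load = np.sin(2 * np.pi * 60 * t)                # ordinary 60 Hz load
arcing_load = clean_load + rng.normal(0, 0.5, t.size)  # plus broadband arc noise
```

The same test hints at [Martin]’s problem: a strong RF carrier coupled onto the house wiring adds exactly the kind of above-mains energy the breaker is listening for.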

Our friend [Martin] moved into a new house, and discovered that he could flip the breakers by transmitting on the 20-meter band. “All the lights in the place went out and my rig switched over to battery. I thought it was strange as I was certainly drawing less than 20 A. I reset the breakers and keyed up again. I reset the breakers again and did a [expletive] Google search.”

And of course, it’s a known problem in the Ham community. In particular, one manufacturer has had serious problems with its breakers misinterpreting intentional radiation, and went to the amateur radio community for help prototyping a new version. [Martin] was sent complimentary Ham-resistant breakers when he called the manufacturer and let them know, so all’s well that ends well.


Filed under: wireless hacks

M&Ms and Skittles Sorting Machine is Both Entertainment and Utility

Tuesday, 02/07/2017 - 02:35

If you have OCD, then the worst thing someone could do is give you a bowl of multi-coloured M&M’s or Skittles — or Gems if you’re in the part of the world where this was written. The candies just won’t taste good until you’ve managed to sort them into separate coloured heaps. And if you’re a hacker, you’ll obviously build a sorting machine to do the job for you.

Use our search box and you’ll find a long list of coverage describing all manner of sorting machines. And while all of them do their designated job, 19-year-old [Willem Pennings]’s M&M and Skittles Sorting Machine is the bee’s knees. It’s one of the best builds we’ve seen to date, looking more like a Scandinavian appliance than a DIY hack. He’s racked up 100k views on YouTube, 900k views on Imgur and almost 2.5k comments on Reddit, all within a day of posting the build details on his blog.

As quite often happens, his work is based on an earlier design, but he ends up adding lots of improvements to his version. It’s got a hopper at the top for loading either M&M’s or Skittles and six bowls at the bottom to receive the color-sorted candies. The user interface is just two buttons — one to select between the two candy types and another to start the sorting. The hardware is all 3D printed and laser cut. But he’s put in extra effort to clean the laser cut pieces and paint them white to give it that neat, appliance look. The white, 3D printed parts add to the appeal.

Rotating the input funnel to prevent the candies from clogging the feed pipes is an ace idea. A WS2812 LED is placed above each bowl, lighting up the bowl where the next candy will be ejected and at the same time, a WS2812 strip around the periphery of the main body lights up with the color of the detected candy, making it a treat, literally, to watch this thing in action. His blog post has more details about the build, and the video after the break shows the awesome machine in action.
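At its heart, each sorting decision is just “which reference color is this sensor reading closest to?”. Here’s a minimal Python sketch of that decision — not [Willem]’s actual code, and the reference RGB values are invented stand-ins for calibrated sensor readings.

```python
# Classify a color-sensor RGB reading by nearest reference color, then use
# that result to pick the output bowl (and the matching WS2812 LED color).
# Reference values below are invented, not calibrated readings.
BOWL_COLORS = {
    "red":    (200, 30, 30),
    "orange": (230, 120, 20),
    "yellow": (220, 200, 40),
    "green":  (30, 160, 60),
    "blue":   (30, 60, 180),
    "brown":  (90, 60, 40),
}

def classify_candy(rgb):
    """Return the bowl name whose reference color is nearest the reading,
    by squared Euclidean distance in RGB space."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(rgb, ref))
    return min(BOWL_COLORS, key=lambda name: dist(BOWL_COLORS[name]))

# The bowl index doubles as the ejection position and the LED to light.
bowl_index = list(BOWL_COLORS).index(classify_candy((210, 35, 25)))
```

Nearest-color matching like this is cheap enough to run per candy on a small microcontroller, which is why it shows up in so many of the sorters we’ve covered.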

And if you’re interested in checking out how this sorter compares with some of the others, check out these builds — Skittles sorting machine sorts Skittles and keeps the band happy, Anti-Entropy Machine Satiates M&M OCD, Only Eat Red Skittles? We’ve Got You Covered, and Hate Blue M&M’s? Sort Them Using the Power of an iPhone!  As we mentioned earlier, candy sorting machines are top priority for hackers.

[via r/electronics]


Filed under: Arduino Hacks, cooking hacks

More Power: Powel Crosley and the Cincinnati Flamethrower

Tuesday, 02/07/2017 - 01:01

We tend to think that there was a time in America when invention was a solo game. The picture of the lone entrepreneur struggling against the odds to invent the next big thing is an enduring theme, if a bit inaccurate and romanticized. Certainly many great inventions came from independent inventors, but the truth is that corporate R&D has been responsible for most of the innovations from the late nineteenth century onward. But sometimes these outfits are not soulless corporate giants. Some are founded by one inventive soul who drives the business to greatness by the power of imagination and marketing. Thomas Edison’s Menlo Park “Invention Factory” comes to mind as an example, but there was another prolific inventor and relentless promoter who contributed vastly to the early consumer electronics industry in the USA: Powel Crosley, Jr.

Born a Gearhead

Powel Crosley, Jr. and Bonzo with his “Crosley Pup” radio. Source: Ohio History Central

Although Powel Crosley’s fortune would be made with radio, his first love was the automobile. At the turn of the 20th century, a thirteen-year-old Crosley started building his first car. His father, a prominent Cincinnati lawyer and clearly a man of means, bet the princely sum of $10 that his son couldn’t complete the car. Powel enlisted the help of his brother Lewis, and together they finished the car, complete with a hand-built electric motor. They won the bet and began what would turn into a long business partnership.

College eventually beckoned, but Powel’s interest in cars distracted him enough that he dropped out after a couple of years. He tried various automotive ventures with mixed results; while starting a car company was his dream, he seemed to be more adept at inventing various small gadgets for the burgeoning car culture in America. By 1919, Powel and Lewis had amassed a two million dollar fortune and began to look for opportunities in other parts of the growing consumer markets.

Crosley’s Pup

The big growth industry in the 1920s was radio, and it was clear to the Crosley boys that there were fortunes to be made in the new field. But radio was still new, and even a rich man like Powel would balk at the $100 price tag for a store-bought set. So when his son asked for a radio, Powel chose the hacker way – he bought a book on radio so they could build one together.

Crosley Pup. Source: Stone Vintage Radio

Intrigued by radio’s potential and with the resources of a manufacturing operation to draw on, Crosley was soon selling complete radio sets. By 1924, Crosley was the biggest radio manufacturer in the world, and in 1925, their iconic “Crosley Pup” radio hit the market. A simple one-tube regenerative receiver, the Pup brought radio to the masses. Its $9.75 price point was possible because of its low parts count – the tube acted as both amplifier and detector, a simple LC tank circuit tuned the radio, and a “tickler coil” could be manually adjusted to feed part of the amplified RF signal back to the tuned circuit and create a positive feedback loop to further amplify the signal.
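As a worked example of that single LC tank (using component values typical of AM sets of the era, not the Pup’s actual parts): a roughly 240 µH coil and a 40-365 pF variable capacitor sweep the resonant frequency right across the AM broadcast band, per f = 1 / (2π√(LC)).

```python
# Resonant frequency of the tuning tank: f = 1 / (2*pi*sqrt(L*C)).
# Component values are era-typical assumptions, not the Pup's real parts.
import math

def resonant_freq_hz(l_henries, c_farads):
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

L = 240e-6  # coil inductance, henries
f_low = resonant_freq_hz(L, 365e-12)   # capacitor fully meshed: bottom of the band
f_high = resonant_freq_hz(L, 40e-12)   # capacitor fully open: top of the band
```

That works out to roughly 540 kHz up to about 1.6 MHz — the whole broadcast band tuned with one knob, which is exactly the economy of parts that made the $9.75 price possible.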

Thanks to a brilliant marketing campaign using Powel’s dog Bonzo as a mascot, the Pup was a consumer hit. But Crosley wasn’t satisfied, and saw trouble brewing on the horizon. His Pup radio was threatened by a simple fact – there just weren’t that many radio stations on the air at the time. Those that were on the air were generally low-power broadcasters, which also posed a problem for the Pup – it wasn’t a particularly “hot” receiver given the sacrifices made to keep the price affordable.

Powel’s answer to the problem is a classic of early 20th-century capitalism and a model of vertical integration: he would build his own radio station.

More Power

In the much simpler regulatory environment of the day, Powel had been experimenting with radio broadcasts since 1921, and in 1922 station WLW began operation from Cincinnati on 700 kHz. Radiating a measly 50 watts at first, Crosley began goosing up the power over the years, on the theory that the more power he used, the cheaper he could build his radios. First 500 watts, then 1,000 watts by 1924, and 5,000 watts the next year. In 1928, Crosley increased WLW’s output to 50,000 watts, making it the most powerful radio station in the world. It could be heard from New York to Florida on a clear night, but Crosley had his sights set higher. Much higher.

In 1933, construction began on a 500,000 watt amplifier for WLW. It’s not clear that there was a valid business justification for taking WLW to ludicrous power, since about 90% of the population of the United States was already in range of WLW at 50 kW. But Crosley was on a tear – 1934 was also the year he purchased the Cincinnati Reds baseball team. So hubris no doubt played a part in WLW’s power boost.

But that hubris came at great expense. A new electrical substation devoted exclusively to WLW had to be constructed, as did ponds to cool the water used to handle the heat from the amplifiers. Gigantic antennas sprouted from the WLW campus in Mason, Ohio, and in January of 1934, WLW began broadcasts at half a million watts.

The complaints began pouring in almost immediately. WLW was blasting other stations off the air, with Toronto stations being particularly vulnerable. By December of 1934, WLW was required to reduce its nighttime emissions back to 50,000 watts while it worked on solutions to the interference problem. New antennas were built and fed out-of-phase signals to shape the radiation pattern enough to solve the Toronto problem, and WLW was turned back up to full power for around-the-clock operation in 1935.

But WLW’s days as the most powerful radio station in the world were numbered. As is often the case with new technologies, politicians finally saw something that could be regulated and in 1938 passed legislation banning broadcast stations over 50,000 watts. WLW’s 500-kW license expired and the Cincinnati Flamethrower was back to the new 50 kW legal limit, where it still operates to this day. Even at this level, WLW can still be heard at night in 38 states.

Since Crosley had plans to take WLW to 750,000 watts before Congress got involved, it’s safe to say that he was disappointed by what became of his baby. But Crosley was never one to rest on his laurels, and while radio remained a huge focus of his business – he would own WLW until after WWII – he had many other business interests. He was a pioneer in home refrigeration, inventing a kerosene-powered fridge for use in rural homes without electricity and putting the first shelves on the door of a refrigerator. He invented the concepts of night baseball and play-by-play radio broadcasts, both of which increased revenues from his Cincinnati Reds so much that all Major League Baseball franchises soon copied his model.

Cars, radio, sports, appliances, airplanes, even early TV broadcasts – Crosley had a hand in almost every major piece of the 20th-century’s consumer culture.


Filed under: Curated, Featured, History

Will Your CAD Software Company Own Your Files, Too?

Monday, 02/06/2017 - 23:30

We’re used to the relationship between ourselves as end users and the commercial software companies from whom we buy the programs we run on our computers. We pay them money, and they give us a licence to use the software. We then go away and do our work with it, create our Microsoft Word documents or whatever, and those documents are our work, to do whatever we want with.

There are plenty of arguments against this arrangement from the world of free software, indeed many of us choose to heed them and run open source alternatives to the paid-for packages or operating systems. But for the majority of individuals and organisations the commercial model is how they consume software. Pay for the product, use it for whatever you want.

What might happen were that commercial model to change? Suppose the developer retained some ownership of your software’s output, so that, for example, a word processor company could legally prevent you from opening a document in anything but their word processor or viewer. It sounds rather unreasonable, and maybe even far-fetched, but there is an interesting case in California’s Ninth Circuit court that could make that a possibility.

The software in question is a specialist CAD package for structural steelwork, and the case concerns a company that had outsourced its CAD work to China. The claim being made is that the ownership lies in “expressive content that is not in the actual design of the component, such as the font or the colors used, the shape of a comment box, or the placement of certain components around the design which appear in the design file, but which are not the design itself“.

Almost all creative software comes pre-loaded with this form of content, whether it is a font, a component in a CAD library, a predefined rounded box for creating a flow chart, or a sound sample. While the court case in question is a minor one in a niche corner of one industry, its potential for a precedent means we should all keep an eye on it. The possibility of this pre-loaded content being used to exert ownership over output files is an extremely worrying one, and while many software companies explicitly grant a licence to use them it’s likely that there would be developers who would be unable to resist the chance to make more cash through this means.

Our community has recently had a nasty surprise on the software licensing front, with EAGLE moving to a subscription model. Let’s hope this doesn’t turn into another one. Meanwhile there’s our series on creating a PCB in KiCAD, should you wish to make the jump to open source.

Via Hacker News.

CAD model of a power station: VGB PowerTech [CC-BY-SA-3.0], via Wikimedia Commons.


Filed under: Software Development

AI and the Ghost in the Machine

Monday, 02/06/2017 - 22:01

The concept of artificial intelligence dates back far before the advent of modern computers — even as far back as Greek mythology. Hephaestus, the Greek god of craftsmen and blacksmiths, was believed to have created automatons to work for him. Another mythological figure, Pygmalion, carved a statue of a beautiful woman from ivory, with whom he proceeded to fall in love. Aphrodite then imbued the statue with life as a gift to Pygmalion, who then married the now-living woman.

Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons

Throughout history, myths and legends of artificial beings that were given intelligence were common. These varied from having simple supernatural origins (such as the Greek myths), to more scientifically-reasoned methods as the idea of alchemy increased in popularity. In fiction, particularly science fiction, artificial intelligence became more and more common beginning in the 19th century.

But, it wasn’t until mathematics, philosophy, and the scientific method advanced enough in the 19th and 20th centuries that artificial intelligence was taken seriously as an actual possibility. It was during this time that mathematicians such as George Boole, Bertrand Russell, and Alfred North Whitehead began presenting theories formalizing logical reasoning. With the development of digital computers in the second half of the 20th century, these concepts were put into practice, and AI research began in earnest.

Over the last 50 years, interest in AI development has waxed and waned with public interest and the successes and failures of the industry. Predictions made by researchers in the field, and by science fiction visionaries, have often fallen short of reality. Generally, this can be chalked up to computing limitations. But, a deeper problem, the understanding of what intelligence actually is, has been a source of tremendous debate.

Despite these setbacks, AI research and development has continued. Currently, this research is being conducted by technology corporations who see the economic potential in such advancements, and by academics working at universities around the world. Where does that research currently stand, and what might we expect to see in the future? To answer that, we’ll first need to attempt to define what exactly constitutes artificial intelligence.

Weak AI, AGI, and Strong AI

You may be surprised to learn that it is generally accepted that artificial intelligence already exists. As Albert (yes, that’s a pseudonym), a Silicon Valley AI researcher, puts it: “…AI is monitoring your credit card transactions for weird behavior, AI is reading the numbers you write on your bank checks. If you search for ‘sunset’ in the pictures on your phone, it’s AI vision that finds them.” This sort of artificial intelligence is what the industry calls “weak AI”.

Weak AI

Weak AI is dedicated to a narrow task, for example Apple’s Siri. While Siri is considered to be AI, it is only capable of operating in a pre-defined range that combines a handful of narrow AI tasks. Siri can perform language processing, interpretation of user requests, and other basic tasks. But, Siri doesn’t have any sentience or consciousness, and for that reason many people find it unsatisfying to even define such a system as AI.

Albert, however, believes that AI is something of a moving target, saying “There is a long running joke in the AI research community that once we solve something then people decide that it’s not real intelligence!” Just a few decades ago, the capabilities of an AI assistant like Siri would have been considered AI. Albert continues, “People used to think that chess was the pinnacle of intelligence, until we beat the world champion. Then they said that we could never beat Go since that search space was too large and required ‘intuition’. Until we beat the world champion last year…”

Strong AI

Still, Albert, along with other AI researchers, only defines these sorts of systems as weak AI. Strong AI, on the other hand, is what most laymen think of when someone brings up artificial intelligence. A Strong AI would be capable of actual thought and reasoning, and would possess sentience and/or consciousness. This is the sort of AI that defined science fiction entities like HAL 9000, KITT, and Cortana (in Halo, not Microsoft’s personal assistant).

Artificial General Intelligence

What actually constitutes a strong AI and how to test and define such an entity is a controversial subject full of heated debate. By all accounts, we’re not very close to having strong AI. But, another type of system, AGI (Artificial General Intelligence), is a sort of bridge between weak AI and strong AI. While AGI wouldn’t possess the sentience of a Strong AI, it would be far more capable than weak AI. A true AGI could learn from information presented to it, and could answer any question based on that information (and could perform tasks related to it).

While AGI is where most current research in the field of artificial intelligence is focused, the ultimate goal for many is still strong AI. After decades, even centuries, of strong AI being a central aspect of science fiction, most of us have taken for granted the idea that a sentient artificial intelligence will someday be created. However, many believe that this isn’t even possible, and a great deal of the debate on the topic revolves around philosophical concepts regarding sentience, consciousness, and intelligence.

Consciousness, AI, and Philosophy

This discussion starts with a very simple question: what is consciousness? Though the question is simple, anyone who has taken an Introduction to Philosophy course can tell you that the answer is anything but. This is a question that has had us collectively scratching our heads for millennia, and few people who have seriously tried to answer it have come to a satisfactory answer.

What is Consciousness?

Some philosophers have even posited that consciousness, as it’s generally thought of, doesn’t even exist. For example, in Consciousness Explained, Daniel Dennett argues that consciousness is an elaborate illusion created by our minds. This is a logical extension of the philosophical concept of determinism, which posits that every event is the only possible result of its causes. Taken to its logical extreme, deterministic theory would state that every thought (and therefore consciousness) is the physical reaction to preceding events (down to atomic interactions).

Most people react to this explanation as an absurdity — our experience of consciousness being so integral to our being that it is unacceptable. However, even if one were to accept the idea that consciousness is possible, and also that oneself possesses it, how could it ever be proven that another entity also possesses it? This is the intellectual realm of solipsism and the philosophical zombie.

Solipsism is the idea that a person can only truly prove their own consciousness. Consider Descartes’ famous quote “Cogito ergo sum” (I think therefore I am). While to many this is a valid proof of one’s own consciousness, it does nothing to address the existence of consciousness in others. A popular thought exercise to illustrate this conundrum is the possibility of a philosophical zombie.

Philosophical Zombies

A philosophical zombie is a human who does not possess consciousness, but who can mimic consciousness perfectly. From the Wikipedia page on philosophical zombies: “For example, a philosophical zombie could be poked with a sharp object and not feel any pain sensation, but yet behave exactly as if it does feel pain (it may say “ouch” and recoil from the stimulus, and say that it is in pain).” Further, this hypothetical being might even think that it did feel the pain, though it really didn’t.

No, not that kind of zombie. [The Walking Dead, AMC]

As an extension of this thought experiment, let’s posit that a philosophical zombie was born early in humanity’s existence that possessed an evolutionary advantage. Over time, this advantage allowed for successful reproduction and eventually conscious human beings were entirely replaced by these philosophical zombies, such that every other human on Earth was one. Could you prove that all of the people around you actually possessed consciousness, or whether they were just very good at mimicking it?

This problem is central to the debate surrounding strong AI. If we can’t even prove that another person is conscious, how could we prove that an artificial intelligence was? John Searle not only illustrates this in his famous Chinese room thought experiment, but further puts forward the opinion that conscious artificial intelligence is impossible in a digital computer.

The Chinese Room

The Chinese room argument as Searle originally published it goes something like this: suppose an AI were developed that takes Chinese characters as input, processes them, and produces Chinese characters as output. It does so well enough to pass the Turing test. Does it then follow that the AI actually “understood” the Chinese characters it was processing?

Searle says that it doesn’t; the AI was just acting as if it understood the Chinese. His rationale is that a man (who understands only English) placed in a sealed room could, given the proper instructions and enough time, do the same. This man could receive a request in Chinese, follow English instructions on what to do with those Chinese characters, and provide the output in Chinese. He never actually understood the Chinese characters, but simply followed the instructions. So, Searle theorizes, an AI would not actually understand what it is processing; it would just be acting as if it did.

An illustration of the Chinese room, courtesy of cognitivephilosophy.net

It’s no coincidence that the Chinese room thought exercise is similar to the idea of a philosophical zombie, as both seek to address the difference between true consciousness and the appearance of consciousness. The Turing Test is often criticized as being overly simplistic, but Alan Turing had carefully considered the problem of the Chinese room before introducing it. This was more than 30 years before Searle published his thoughts, but Turing had anticipated such a concept as an extension of the “problem of other minds” (the same problem that’s at the heart of solipsism).

Polite Convention

Turing addressed this problem by giving machines the same “polite convention” that we give to other humans. Though we can’t know that other humans truly possess the same consciousness that we do, we act as if they do out of a matter of practicality — we’d never get anything done otherwise. Turing believed that discounting an AI based on a problem like the Chinese room would be holding that AI to a higher standard than we hold other humans. Thus, the Turing Test equates perfect mimicry of consciousness with actual consciousness for practical reasons.

Alan Turing, creator of the Turing Test and the “polite convention” philosophy

As far as most modern AI researchers are concerned, the question of defining “true” consciousness is, for now, best left to philosophers. Trevor Sands (an AI researcher for Lockheed Martin, who stresses that his statements reflect his own opinions, and not necessarily those of his employer) says “Consciousness or sentience, in my opinion, are not prerequisites for AGI, but instead phenomena that emerge as a result of intelligence.”

Albert takes an approach which mirrors Turing’s, saying “if something acts convincingly enough like it is conscious we will be compelled to treat it as if it is, even though it might not be.” While debates go on among philosophers and academics, researchers in the field have been working all along. Questions of consciousness are set aside in favor of work on developing AGI.

History of AI Development

Modern AI research was kicked off in 1956 with a conference held at Dartmouth College. This conference was attended by many who would later become experts in AI research, and who were primarily responsible for the early development of AI. Over the next decade, they would introduce software which would fuel excitement about the growing field. Computers were able to play (and win) at checkers, prove math theorems (in some cases, finding proofs more efficient than those previously done by mathematicians), and could provide rudimentary language processing.

Unsurprisingly, the potential military applications of AI garnered the attention of the US government, and by the ’60s the Department of Defense was pouring funds into research. Optimism was high, and this funded research was largely undirected. It was believed that major breakthroughs in artificial intelligence were right around the corner, and researchers were left to work as they saw fit. Marvin Minsky, a prolific AI researcher of the time, stated in 1967 that “within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”

Unfortunately, the promise of artificial intelligence wasn’t delivered upon, and by the ’70s optimism had faded and government funding was substantially reduced. Lack of funding meant that research was dramatically slowed, and few advancements were made in the following years. It wasn’t until the ’80s that progress in the private sector with “expert systems” provided financial incentives to invest heavily in AI once again.

Throughout the ’80s, AI development was again well-funded, primarily by the American, British, and Japanese governments. Optimism reminiscent of that of the ’60s was common, and again big promises about true AI being just around the corner were made. Japan’s Fifth Generation Computer Systems project was supposed to provide a platform for AI advancement. But, the lack of fruition of this system, and other failures, once again led to declining funding in AI research.

Around the turn of the century, practical approaches to AI development and use were showing strong promise. With access to massive amounts of information (via the internet) and powerful computers, weak AI was proving very beneficial in business. These systems were used to great success in the stock market, for data mining and logistics, and in the field of medical diagnostics.

Over the last decade, advancements in neural networks and deep learning have led to a renaissance of sorts in the field of artificial intelligence. Currently, most research is focused on the practical applications of weak AI, and the potential of AGI. Weak AI is already in use all around us, major breakthroughs are being made in AGI, and optimism about artificial intelligence is once again high.

Current Approaches to AI Development

Researchers today are investing heavily into neural networks, which loosely mirror the way a biological brain works. While true virtual emulation of a biological brain (with modeling of individual neurons) is being studied, the more practical approach right now is with deep learning being performed by neural networks. The idea is that the way a brain processes information is important, but that it isn’t necessary for it to be done biologically.
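As a toy illustration of simple nodes connected into a complex system, here is a two-layer network of sigmoid nodes learning XOR by gradient descent — a function no single node can represent on its own. The architecture and hyperparameters are arbitrary demo choices, unrelated to the research described here.

```python
# Minimal two-layer neural network trained on XOR with batch gradient
# descent (learning rate 1.0 folded into the updates). Demo values only.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # 2 inputs -> 8 hidden nodes
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # 8 hidden -> 1 output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
initial_loss = float(np.mean((out - y) ** 2))

for _ in range(10000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)    # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)     # backpropagated to the hidden layer
    W2 -= h.T @ d_out; b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;   b1 -= d_h.sum(axis=0)

_, out = forward(X)
final_loss = float(np.mean((out - y) ** 2))
predictions = (out > 0.5).astype(int).ravel()   # compare against XOR targets
```

The point isn’t the tiny network itself but the recipe: the same nodes, the same gradient-descent loop, scaled up and fed different data, is the “single technique” Albert describes below being applied to vision, speech, and language alike.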

Neural networks use simple nodes connected to form complex systems. [Photo credit: Wikipedia]

As an AI researcher specializing in deep learning, it’s Albert’s job to try to teach neural networks to answer questions. “The dream of question answering is to have an oracle that is able to ingest all of human knowledge and be able to answer any questions about this knowledge” is Albert’s reply when asked what his goal is. While this isn’t yet possible, he says “We are up to the point where we can get an AI to read a short document and a question and extract simple information from the document. The exciting state of the art is that we are starting to see the beginnings of these systems reasoning.”

Trevor Sands does similar work with neural networks for Lockheed Martin. His focus is on creating “programs that utilize artificial intelligence techniques to enable humans and autonomous systems to work as a collaborative team.” Like Albert, Sands uses neural networks and deep learning to process huge amounts of data intelligently. The hope is to come up with the right approach, and to create a system which can be given direction to learn on its own.

Albert describes the difference between weak AI and the more recent neural network approaches: “You’d have vision people with one algorithm, and speech recognition with another, and yet others for doing NLP (Natural Language Processing). But, now they are all moving over to use neural networks, which is basically the same technique for all these different problems. I find this unification very exciting. Especially given that there are people who think that the brain and thus intelligence is actually the result of a single algorithm.”

Basically, the ideal neural network would be an AGI: one that works for any kind of data. Like the human mind, this would be true intelligence that could process any kind of data it was given. Unlike current weak AI systems, it wouldn’t have to be developed for a specific task. The same system that might be used to answer questions about history could also advise an investor on which stocks to purchase, or even provide military intelligence.

Next Week: The Future of AI

As it stands, however, neural networks aren’t sophisticated enough to do all of this. These systems must be “trained” on the kind of data they’re taking in, and how to process it. Success is often a matter of trial and error for Albert: “Once we have some data, then the task is to design a neural network architecture that we think will perform well on the task. We usually start with implementing a known architecture/model from the academic literature which is known to work well. After that I try to think of ways to improve it. Then I can run experiments to see if my changes improve the performance of the model.”

The ultimate goal, of course, is to find that perfect model that works well in all situations. One that doesn’t require handholding and specific training, but which can learn on its own from the data it’s given. Once that happens, and the system can respond appropriately, we’ll have developed Artificial General Intelligence.

Researchers like Albert and Trevor have a good idea of what the Future of AI will look like. I discussed this at length with both of them, but have run out of time today. Make sure to join me next week here on Hackaday for the Future of AI where we’ll dive into some of the more interesting topics like ethics and rights. See you soon!


Filed under: Featured, Interest, robots hacks, slider

Amazing 3D-Scanner Teardown and Rebuild

Monday, 02/06/2017 - 19:01

Pour yourself a nice hot cup of tea, because [iliasam]’s latest work on a laser rangefinder (in Russian, translated here) is a long and interesting read. The shorter version is that he got his hands on a broken laser security scanner, nearly completely reverse-engineered it, got it working again, put it on a Roomba that was able to map out his apartment, and then re-designed it to become a tripod-mounted, full-room 3D scanner. Wow.

The scanner in question has a spinning mirror and a laser time-of-flight ranger, and is designed to shut down machinery when people enter a “no-go” region. As built, it returns ranges along a horizontal plane — it’s a 2D scanner. The conversion to a 3D scanner meant adding another axis, and to do this with sufficient precision required flipping the rig on its side, salvaging the fantastic bearings from a VHS machine, and driving it all with the surprisingly common A4988 stepper driver and an Arduino. A program on a PC reads in the data, and the stepper moves another 0.36 degrees. The results speak for themselves.
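The math that turns those sweeps into a point cloud is ordinary trigonometry. Here is a hedged sketch in Python: the 0.36 degree tilt step comes from the article, but the code itself is an illustration, not [iliasam]’s actual firmware.

```python
import math

# Sketch of the coordinate math (not [iliasam]'s actual code): the scanner
# returns ranges along a plane; tilting that plane 0.36 degrees per stepper
# move turns each horizontal sweep into one ring of a 3D point cloud.

TILT_STEP_DEG = 0.36  # one stepper increment between horizontal sweeps

def to_xyz(distance_m, sweep_deg, tilt_deg):
    """Convert one time-of-flight reading to Cartesian coordinates.
    sweep_deg: mirror angle within the scan plane; tilt_deg: plane elevation."""
    sweep = math.radians(sweep_deg)
    tilt = math.radians(tilt_deg)
    horiz = distance_m * math.cos(tilt)  # projection onto the level plane
    return (horiz * math.cos(sweep),
            horiz * math.sin(sweep),
            distance_m * math.sin(tilt))

# One reading straight ahead with the plane level lands on the x axis:
print(to_xyz(2.0, 0.0, 0.0))  # (2.0, 0.0, 0.0)
```

A full scan is then just a loop: sweep the mirror, collect ranges, advance the tilt by `TILT_STEP_DEG`, and repeat until the room is covered.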

This isn’t [iliasam]’s first laser-rangefinder project, naturally. We’ve previously featured his homemade parallax-based ranger for use on a mobile robot, which is equally impressive. What amazes us most about these builds is the near-professional quality of the results pulled off on a shoestring budget.


Filed under: laser hacks

Reverse Engineering Ikea’s New Smart Bulbs

Monday, 02/06/2017 - 16:01

Over in Sweden, the Czech Republic, Italy, and Belgium, Ikea is launching a new line of ‘smart’ light bulbs. These countries are apparently the test market for these bulbs, and they’ll soon be landing on American shores. This means smart Ikea bulbs will be everywhere soon, and an Internet of Light Bulbs is a neat thing to explore. [Markus] got his hands on a few of these bulbs, and is now digging into their inner workings (German Make Magazine, with a Google Translate that includes the phrase, ‘capering the pear’).

There are currently four versions of these Ikea bulbs, ranging from a 400 lumen bulb designed for track lights to a 980 lumen bulb that will probably work in an American Edison lamp socket. These lights are controlled via a remote, with each individual bulb paired to the remote by turning the lamp on, holding the remote close to the bulb, and pressing a button.

Inside these bulbs is a Silicon Labs microcontroller with ZigBee support, twelve chip LEDs, and associated electronics that look like they might pass the bigclivedotcom smoke test. After tearing apart this bulb and planting the wireless module firmly in a breadboard, [Markus] found he could dim a pair of LEDs simply by clicking on the remote. Somewhere in these bulbs, there’s plenty of potential for interesting hacks.

As with all Internet of Things, we must ask an important question: will it become part of Skynet and shut down the Internet, like webcams did last summer? These Ikea bulbs look pretty safe in that regard, as the bulb is inextricably tied to the remote and must be paired by holding it close to the bulb. We’re sure there are a few more interesting exploits for these bulbs, so once they’re released in the US we’ll take a look at them.


Filed under: home hacks

Wearable Predicts Tone of Conversation from Speech, Vital Signs

Monday, 02/06/2017 - 13:01

If you’ve ever wondered how people are really feeling during a conversation, you’re not alone. By and large, we rely on a huge number of cues — body language, speech, eye contact, and a million others — to determine the feelings of others. It’s an inexact science to say the least. Now, researchers at MIT have developed a wearable system to analyze the tone of a conversation.

The system uses Samsung Simband wearables, which are capable of measuring several physiological markers — heart rate, blood pressure, blood flow, and skin temperature — as well as movement thanks to an on-board accelerometer. This data is fed into a neural network which was trained to classify a conversation as “happy” or “sad”. Training consisted of capturing 31 conversations of several minutes duration each, where participants were asked to tell a happy or sad story of their own choosing. This was done in an effort to record more organic emotional states than simply eliciting emotion through the use of more typical “happy” or “sad” video materials often used in similar studies.

The technology is in a very early stage of development, however the team hopes that down the road, the system will be sufficiently advanced to act as an emotional coach in real-life social situations. There is a certain strangeness about the idea of asking a computer to tell you how a person is feeling, but if humans are nothing more than a bag of wet chemicals, there might be merit in the idea yet. It’s a pretty big if.

Machine learning is becoming more powerful on a daily basis, particularly as we have ever greater amounts of computing power to throw behind it. Check out our primer on machine learning to get up to speed.

[Thanks to Adam for the tip!]


Filed under: wearable hacks

Gliding To Underwater Filming Success

Monday, 02/06/2017 - 10:01

If you are a fan of nature documentaries you will no doubt have been wowed by their spectacular underwater sequences. So when you buy a GoPro or similar camera and put it in a waterproof case accessory, of course you take it with you when you go swimming. Amazing footage and international documentary stardom awaits!

Of course, your results are disappointing. The professionals have years of experience and acquired skill plus the best equipment money can buy, and you just have your hand, and a GoPro. The picture is all over the place, and if there is a subject it’s extremely difficult to follow.

[Steve Schmitt] has an answer to this problem, and it’s a refreshingly simple one. He’s built an underwater glider to which he attaches his camera and launches across the submerged vista he wishes to film. Attached to a long piece of line for retrieval, it is set to glide gently downwards at a rate set by the position of the camera on its boom.

Construction is extremely simple. The wing is a delta-shaped piece of corrugated plastic roofing sheet, while the fuselage is a piece of plastic pipe. A T-connector has the camera mount on it, and this can slide along the fuselage for pre-launch adjustments. It’s that simple, but of course sometimes the best builds are the simple ones. He’s put up a video which you can see below the break, showing remarkable footage of a test flight through a cold-water spring.

If you want to try your hand at underwater photography we can direct you to some underwater camera housings, but be careful that you avoid being attacked by a shark!


Filed under: digital cameras hacks

Hackaday Links: February 5, 2017

Monday, 02/06/2017 - 07:00

A lot of people around here got their start in electronics with guitar pedals. This means soldering crappy old transistors to crappy old diodes and fawning over your tonez, d00d. Prototyping guitar pedals isn’t easy, though, and now there’s a CrowdSupply project to make it easier. The FX Development Board is just that — a few 1/4″ jacks, knobs, pots, power supply, and a gigantic footswitch to make prototyping guitar pedals and other musical paraphernalia easy. Think of it as a much more feature-packed Beavis Board that’s still significantly cheaper.

How do Communicators in Star Trek work? Nobody knows. Why don’t the crew always have to tap their badge before using it? Nobody knows. How can the com badge hear, ‘Geordi to Worf’, and have Worf instantly respond? Oh, we’ve argued about this on IRC for years now. Over on Hackaday.io, [Joe] is building a Star Trek com badge. The electronics are certainly possible with modern microcontrollers, but for the enclosure, we’ll have to review a few scenes from Time’s Arrow and The Enemy.

[Alois] was working with an Intel Edison on a breadboard. He was generating a signal, and sending it through a little tiny breadboard wire to an oscilloscope. The expected waveform should have been a nice square wave at 440MHz. What he got out of this wire was a mess. You shouldn’t use long wires when probing circuits. That little breadboard wire was a perfect radiator for 440MHz, and the entire setup turned into an antenna.

[Douglas] is running a Kenwood TM-D710A as his amateur radio rig. This radio does APRS stuff, but it requires an external GPS and power source to do it right. GPS receivers are now very small and very cheap, so [Douglas] just stuffed a GPS module inside his radio. The module itself is a GP-20U7, a tiny receiver the size of a postage stamp, which he wired up to a few pads on the radio PCB.

Here’s an upcoming Kickstarter that’s going straight to the front page of Boing Boing. It’s Pong in coffee table format, which we first saw last spring. Instead of racing the beam, this version of Pong is mechanical. The ball is a cube, the paddles are slightly longer cubes, and the entire game is a highly refined CNC machine. Here’s something from seven years ago that’s also Pong in coffee table format. Pongmechanik is electromechanical Pong, built entirely out of switches, relays, and a few motors.


Filed under: Hackaday Columns, Hackaday links

Sync Your Pocket Synth with Ableton

Monday, 02/06/2017 - 04:00

The Teenage Engineering Pocket Operators are highly popular devices — pocket-sized synthesizers packed full of exciting sounds and rhythmic options. They’re also remarkably affordable. However, this comes at a cost — they don’t feature MIDI connectivity, so it can be difficult to integrate them into a bigger digital music setup. Never fear, little-scale’s got your back. This Max patch allows you to synchronize an Ableton Link network to your Pocket Operators.

little-scale’s trademark is creating useful software and hardware from cheap, off-the-shelf parts wherever possible. The trick here is a simple Max patch combined with a $2 USB soundcard or Bluetooth audio adapter. It’s all very simple: the Pocket Operators have a variety of sync modes that sync on audio pulses, essentially a click track. They use stereo 3.5mm jacks on board, generally using one channel for the synth’s audio and one channel for receiving sync pulses. It’s a simple job to synthesize suitable sync pulses in Ableton, and then pump them out to the Pocket Operators through the Bluetooth or USB audio output.

The Pocket Operators sync at a rate of 2 PPQN — that’s pulses per quarter note. little-scale says that KORG volcas & monotrons should also work with this patch, as they run at the same rate, but it’s currently untested. If you happen to try this for yourself, let us know if it works for you. Video below the break.
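The arithmetic behind those sync pulses is simple enough to sketch. This Python fragment is an illustration, not little-scale’s Max patch: it computes the pulse spacing at 2 PPQN from the tempo and renders a crude click track.

```python
# Back-of-the-envelope sketch of the sync scheme (not little-scale's patch):
# at 2 PPQN a pulse must land twice per quarter note, so the pulse period in
# samples follows directly from the tempo and the audio output's sample rate.

def pulse_period_samples(bpm, ppqn=2, sample_rate=44100):
    quarter_note_s = 60.0 / bpm  # seconds per quarter note at this tempo
    return int(sample_rate * quarter_note_s / ppqn)

def click_track(bpm, n_samples, pulse_len=32, sample_rate=44100):
    """Render a crude pulse train: short full-scale bursts, silence between."""
    period = pulse_period_samples(bpm, sample_rate=sample_rate)
    return [1.0 if i % period < pulse_len else 0.0 for i in range(n_samples)]

print(pulse_period_samples(120))  # 11025: one pulse every quarter second
```

At 120 BPM and 44.1 kHz that works out to a pulse every 11025 samples, which is the kind of steady click the Pocket Operator’s sync input expects on its channel of the 3.5mm jack.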

We’ve seen pocket synths on Hackaday before, with this attractive mixer designed for use with KORG Volcas.


Filed under: musical hacks

Olimex Announces Their Open Source Laptop

Monday, 02/06/2017 - 01:01

A few months ago at the Hackaday | Belgrade conference, [Tsvetan Usunov], the brains behind Olimex, gave a talk on a project he’s been working on. He’s creating an Open Source Hacker’s Laptop. The impetus for this project came to [Tsvetan] after looking at how many laptops he’s thrown away over the years. Battery capacity degrades, keyboards have a fight with coffee, and manufacturers seem to purposely make laptops hard to repair.

Now, this do-it-yourself, Open Source Hardware, hacker-friendly laptop is complete. The Olimex TERES I laptop has been built, plastic has been injected into molds, and all the mechanical and electronic CAD files are up on GitHub. This Open Source laptop is done, but you can’t buy it quite yet; for that, we’ll have to wait until Olimex comes back from FOSDEM.

The design of this laptop is completely Open Source. Usually when we hear this phrase, the Open Source part only means the electronics and firmware. Yes, there are exceptions, but the STL files for the PiTop, the ‘3D printable Raspberry Pi laptop’, are not available, rendering the ‘3D printable’ part of PiTop’s marketing incongruent with reality. If you want to build a case for the most Open Source laptop to date, [Bunnie]’s Novena, random GitHub repos are your best source. The Olimex TERES I is completely different; not only can you simply buy all the parts for the laptop, the hardware files are going up too. To be fair, this laptop is built with injection molded parts and will probably be extremely difficult to print on a standard desktop filament printer. The effort is there, though, and this laptop can truly be built from source.

As far as specs go, this should be a fairly capable laptop. The core PCB is built around an Allwinner ARM Cortex-A53, sporting 1GB of DDR3L RAM, 4GB of eMMC Flash, WiFi, Bluetooth, a camera, and an 11.6″ 1366×768 display. Compared to an off-the-shelf, bargain-basement consumer craptop, those aren’t great specs, but at least the price is commensurate with performance: The TERES I will sell for only €225, or about $250 USD. That’s almost impulse buy territory, and we can’t wait to get our hands on one.


Filed under: computer hacks, laptops hacks, slider

Building A Wavetable Synth

Sunday, 02/05/2017 - 23:30

Every semester at one of [Bruce Land]’s electronics labs at Cornell, students team up, and pitch a few ideas on what they’d like to build for the final project. Invariably, the students will pick what they think is cool. The only thing we know about [Ian], [Joval] and [Balazs] is that one of them is a synth head. How do we know this? They built a programmable, sequenced, wavetable synthesizer for their final project in ECE4760.

First things first — what’s a wavetable synthesizer? It’s not adding, subtracting, and modulating sine, triangle, and square waves. That, we assume, is the domain of the analog senior lab. A wavetable synth isn’t a deep application of a weird inverse FFT, either — that’s closer to additive synthesis. Wavetable synthesis is simply playing a single waveform — one arbitrary wave — at different speeds. It was popular in the 80s and 90s, so it makes for a great application of modern microcontrollers.

The difficult part of the build was, of course, getting waveforms out of a microcontroller, mixing them, and modulating them. This is a lab course, so a few of the techniques learned earlier in the semester when playing with DTMF tones came in very useful. The microcontroller used in the project is a PIC32, and does all the arithmetic in 32-bit fixed point. Even though the final audio output is at 12-bit resolution, the difference between doing the math at 16-bit and 32-bit was obvious.
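The heart of any wavetable engine is a phase accumulator. The Python below is a sketch of that idea rather than the students’ PIC32 code (their fixed-point formats aren’t published): the top bits of a 32-bit phase word index a single stored cycle, and pitch is set purely by the per-sample increment.

```python
import math

# Sketch of the core wavetable trick (not the students' PIC32 code): one
# stored cycle, replayed at different speeds via a fixed-point phase
# accumulator, exactly the kind of 32-bit arithmetic the article describes.

TABLE_BITS = 8
TABLE_SIZE = 1 << TABLE_BITS  # 256-entry wavetable holding one cycle
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def render(freq_hz, n_samples, sample_rate=44100):
    phase = 0
    inc = int(freq_hz * (1 << 32) / sample_rate)  # phase step per sample
    out = []
    for _ in range(n_samples):
        out.append(table[phase >> (32 - TABLE_BITS)])  # top bits pick entry
        phase = (phase + inc) & 0xFFFFFFFF             # 32-bit wraparound
    return out

samples = render(440.0, 100)  # 100 samples of a 440 Hz tone
```

Doubling the increment doubles the pitch, all from the same stored wave, which is why the technique was so cheap on 80s and 90s hardware.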

A synthesizer isn’t useful unless it has a user interface of some kind, and for this the guys turned to a small TFT display, a few pots, and a couple of buttons. This is a complete GUI to set all the parameters, waveforms, tempo, and notes played by the sequencer. From the video of the project (below), this thing sounds pretty good for a machine that generates bleeps and bloops.


Filed under: musical hacks

DIY Wire Spooler with Clever Auto-Tensioning System

Sunday, 02/05/2017 - 22:01

[Solarbotics] have shared a video of their DIY wire spooler that uses OpenBeam hardware plus some 3D printed parts to flawlessly spool wire regardless of spool size mismatches. Getting wire from one spool to another can be trickier than it sounds, especially when one spool is physically larger than the other. This is because consistently moving wire between different sizes of spools requires that they turn at different rates. On top of that, the ideal rate changes as one spool is emptying and the other gets larger. The wire must be kept taut when moving from one spool to the next; any slack is asking for winding problems. At the same time, the wire shouldn’t be so taut as to put unnecessary stress on it or the motor on the other end.

There aren’t any build details, but the video embedded below gives a good overview and understanding of the whole system. In the center is a tension bar with pulleys on both ends through which the wire feeds. This bar pivots at the center and takes up slack while its position is encoded by turning a pot via a 3D printed gear. Both spools are motor driven and the speed of the source spool is controlled by the position of the tension bar. As a result, the bar automatically takes up any slack while dynamically slowing or speeding the feed rate to match whatever is needed.
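Since [Solarbotics] didn’t publish code, here is a guess at the control idea in Python: the tension bar’s pot reading is compared to its balanced position, and the source spool’s speed is nudged proportionally. The 10-bit ADC center value and the gain are invented for illustration.

```python
# Simplified model of the control idea (Solarbotics didn't publish code):
# the tension bar's pot reading says whether the wire is slack or taut, and
# the source spool's speed is nudged proportionally around a base rate.

BAR_CENTER = 512  # pot reading with the bar balanced (10-bit ADC assumed)
GAIN = 0.004      # speed correction per ADC count of bar deflection

def source_speed(pot_reading, base_speed=1.0):
    """More slack -> bar drops -> reading falls -> feed slower (and vice
    versa). Returns a speed multiplier for the source motor, clamped at 0."""
    error = pot_reading - BAR_CENTER
    return max(0.0, base_speed + GAIN * error)

print(source_speed(512))  # 1.0 -- balanced bar, nominal speed
print(source_speed(262))  # 0.0 -- lots of slack, stop feeding
```

A real implementation would add some damping so the bar doesn’t oscillate, but the proportional term alone captures how the rig adapts as the spool diameters change.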

There’s nothing like facing down a repetitive and lengthy task to get a good solution brewing. This reminds us of another DIY automation solution to a wire-related need: a contraption to cut 1,000 pieces of wire.


Filed under: tool hacks

Grant Anyone Temporary Permissions to Your Computer with SSH

Sunday, 02/05/2017 - 19:01

This is a super cute hack for you Linux users out there. If you have played around with SSH, you know it’s the most amazing thing since sliced bread. For tunneling in, tunneling out, or even just to open up a shell safely, it’s the bee’s knees. If you work on multiple computers, do you know about ssh-copy-id? We had been using SSH for years before stumbling on that winner.

Anyway, [Felipe Lavratti]’s ssh-allow-friend script is simplicity itself, but the feature it adds is easily worth the cost of admission. All it does is look up your friend’s public key (at the moment only from GitHub) and add it temporarily to your authorized_keys file. When you hit ctrl-C to quit the script, it removes the keys. As long as your friend has the secret key that corresponds to the public key, he or she will be able to log in as your user account.

There’s really nothing going on here that you couldn’t do by hand. The script simply automates creating and removing the public key, and uses GitHub as an arbiter of your friend’s identity. If you know their GitHub user name for sure, and they have attached their public key to the account, this is a very streamlined and simple procedure. (We tried it out ourselves, only to find that we hadn’t associated a public SSH key with our account.) That said, it can be extended to any trusted location that serves up public keys.
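The gist fits in a few lines. Here it is sketched in Python rather than the original shell: the https://github.com/&lt;username&gt;.keys endpoint is GitHub’s real convention for serving an account’s public keys, but the tag comment and function names are our own.

```python
# The gist of ssh-allow-friend, sketched in Python (the real script is
# shell). GitHub serves every account's public SSH keys as plain text at
# https://github.com/<username>.keys; the "# temp-friend" tag is our own.
import urllib.request

def fetch_github_keys(handle):
    """Fetch a user's public keys from GitHub as a list of key lines."""
    url = f"https://github.com/{handle}.keys"
    with urllib.request.urlopen(url) as r:
        return [ln for ln in r.read().decode().splitlines() if ln.strip()]

def grant(keys_path, friend_keys):
    """Append the friend's keys, tagged so we can find them again."""
    with open(keys_path, "a") as f:
        for key in friend_keys:
            f.write(f"{key} # temp-friend\n")

def revoke(keys_path):
    """Strip every line we tagged, restoring the original file."""
    with open(keys_path) as f:
        kept = [ln for ln in f if "# temp-friend" not in ln]
    with open(keys_path, "w") as f:
        f.writelines(kept)
```

Wrapping grant() and revoke() in a try/finally around a blocking read would reproduce the original script’s ctrl-C cleanup behavior.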

Looking through our musty archives for an article on SSH tips and tricks, we couldn’t find one. Is that possible, commenteers? That’s something we’ll have to get right on top of so send us your favorite SSH magic. In the meantime, we’ll placate your lust for more Hackaday by tossing up [Al Williams]’s nice writeup of MOSH, a mobile SSH client.


Filed under: internet hacks, linux hacks, slider

Two Guitars, Two Amps, And Three Pole Dual Throw

Sunday, 02/05/2017 - 16:01

[Alexbergsland] plays electric guitar. More accurately, he plays two electric guitars, through two amps. Not wanting to plug and unplug guitars from amps and amps from guitars, he designed an AB/XY pedal to select between two different guitars or two different amps with the press of a button.

The usual way of sending a guitar signal to one amp or another is with an A/B pedal that takes one input and switches the output to one jack or another. Similarly, to switch between two inputs, a guitarist would use an A/B pedal. For [Alex]’s application, that’s two pedals that usually sell for $50, and would consequently take up far too much room on a pedalboard. This problem can be solved with a pair of 3PDT footswitches that sell for about $4 each. Add in a few jacks, LEDs, and a nice aluminum enclosure, and [Alex] has something very cool on his hands.

The circuit for this switcher is fairly simple, so long as you can wrap your head around how these footswitches are wired internally. The only other special addition to this build is a trio of LEDs to indicate which output is selected and if both inputs are on. These LEDs are powered by a 9V adapter embedded in the pedalboard, but they’re not really necessary for complete operation of this input and output switcher. The LEDs in this project can be omitted, making this a completely passive pedal to direct signals around guitars and amps.


Filed under: musical hacks

Bantam-sized Jukebox Reproduction Tops the Fabrication Charts

Sunday, 02/05/2017 - 13:01

Restoring a genuine vintage jukebox is a fun project, but finding suitable candidates can be a difficult proposition. Not only can a full-size machine take a huge bite out of your wallet, it can take up a lot of room, too. So a replica miniature jukebox might be just the thing.

We have to admit, at first glance [Allan_D_Murray]’s project seemed like just another juke upgrade. It was only after diving into his very detailed build log that we realized this classic-looking juke is only about 30″ (80 cm) tall. It’s not exactly diminutive, but certainly more compact than the original Wurlitzer 1015 from which it draws its inspiration. But it sure looks like the real thing. Everything is custom made, from the round-top case to the 3D-printed trim pieces, which are painted to look like chrome-plated castings. The guts of the juke are pretty much what you’d expect these days — a PC playing MP3s. But an LCD monitor occupies the place where vinyl records would have lived in the original and displays playlists of the albums available. There’s an original-looking control panel on the front, and there are even bubblers in the lighted pilasters and arches.

Hats off to [Allan] for such a detailed and authentic tribute to a mid-century classic. But if a reproduction just won’t cut it for you, check out this full-size juke with the original electronics.


Filed under: classic hacks, musical hacks

Quick and Easy IoT Prototyping with Involt

Sunday, 02/05/2017 - 10:00

IoT, web apps, and connected devices are all becoming increasingly popular. But, the market still resembles a wild west apothecary, and no single IoT ecosystem or architecture seems to be the one bottle of snake oil we’ll all end up using. As such, we hackers are keen to build our own devices, instead of risking being locked into an IoT system that could become obsolete at any time. But, building an IoT device and interface takes a wide range of skills, and those who are lacking skill in the dark art of programming might have trouble creating a control app for their shiny new connected-thing.

Enter Involt, which is a framework for building hardware control interfaces using HTML and CSS. The framework is built on Node-Webkit, which means the conventions should be familiar to those with a bit of web development background. Hardware interactions (on Arduinos) are handled with simple CSS classes. For example, a button might contain a CSS class which changes an Arduino pin from high to low.

Involt can take that CSS and convert it into a function, which is then sent to the Arduino via serial or Bluetooth communication. For more advanced functionality, Javascript (or really any other language) can be used to define what functions are generated — and, in turn, sent to the Arduino. But, all that is needed for the basic functionality necessary for many IoT devices (which might only need to be turned on and off, or set to a certain value) is a bit of HTML and CSS knowledge. You’ll create both the interface and the underlying hardware interactions all within an HTML layout with CSS styling and functionality.
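A class-to-command translation like that can be sketched in a few lines. Treat this Python as a guess at the shape of the idea: Involt’s real class names and serial protocol may differ, and “ard-write-13-1” is invented for illustration.

```python
# A guess at how a class-to-command mapping can work. Involt's real class
# names and wire protocol may differ; "ard-write-13-1" and the "W13:1"
# command format here are invented purely for illustration.

def css_class_to_command(css_class):
    """Turn e.g. 'ard-write-13-1' into a pin-write command like 'W13:1'."""
    parts = css_class.split("-")
    if len(parts) != 4 or parts[0] != "ard" or parts[1] != "write":
        raise ValueError(f"not a recognized hardware class: {css_class!r}")
    pin, value = int(parts[2]), int(parts[3])
    return f"W{pin}:{value}"

print(css_class_to_command("ard-write-13-1"))  # W13:1 -- set pin 13 high
```

On the hardware side, a small Arduino sketch would read a command like this from the serial port and call digitalWrite accordingly; everything else stays in HTML and CSS.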

While Involt isn’t the only framework to simplify hardware interaction (it’s not even the only Node.js based method), the simplicity is definitely laudable. For those who are just getting started with these sorts of devices, Involt can absolutely make the process faster and less painful. And, even for those who are experienced in this arena, the speed and efficiency of prototyping with Involt is sure to be useful.


Filed under: Arduino Hacks

Plywood Steals the Show from Upcycled Broken Glass Art Lamps

Sunday, 02/05/2017 - 07:00

You can tell from looking around his workshop that [Paul Jackman] likes plywood even more than we do. And for the bases of these lamps, he sandwiches enough of the stuff together that it becomes a distinct part of the piece’s visuals. Some work with a router and some finishing, and they look great! You can watch the work, and the results, in his video embedded below.

The plywood bases also hide the electronics: a transformer and some LEDs. To make space for them in the otherwise solid blocks of wood, he tosses them in the CNC router and hollows them out. A little epoxy to attach the caps of the jars, and the bases were finished. Fill the jars with colored glass, add a transparent tube to carry light all the way to the top, and they’re done.

Quality plywood is rad. It’s strong, it doesn’t warp, and it looks great edge-on. If [Paul]’s project doesn’t convince you, perhaps you’d like to check out [Gerrit]’s love-letter to the material, and pick some up next time you’re at the big box store.


Filed under: home hacks