
Exploring the Future of Design in Autonomous Vehicles: An Interview with Mark Forbes of Altium

All About Circuits recently met with Mark Forbes, Director of Product and Persona Marketing at Altium, to discuss autonomous vehicles, including the challenges of unifying safety standards and why embedded software is so important to this booming field.
An electrical engineer by trade, Mark has been in the electronic design automation industry for 30 years. Within that time, he's patented concealed antenna products and configurable portable HF antennas and worked specifically with TASKING, a subsidiary of Altium that specializes in embedded software design.


Mark Forbes, Altium.

He sat down with AAC's Karissa Manske to discuss what the view of autonomous vehicles is from Altium's perspective in terms of safety standards and challenges for design engineers.

All About Circuits: Safety standards for autonomous vehicles have been changing as rapidly as the technologies that go into them. What do you think are the biggest challenges design engineers face as they work to comply with all of them?
Mark Forbes: Well, you definitely hit the nail on the head with that question—these standards evolve quickly and keeping up is critically important. Let me phrase it this way: engineers I talk to view safety standards as another constraint. Each time you start a new project, you have a list of constraints including cost, time, memory, and so on. Safety standards are on that list.
In the back of the mind of every engineer writing the software is the thought, “if I make a mistake, I could hurt someone.” So design engineers embrace safety standards because they know that mistakes could propagate and cause injury or even death. From the outside, it can look like all of these standards are obstacles to the design process, but they’re actually the points you design to.


Image from Altium.

AAC: With companies coming up with their own proprietary products for autonomous vehicles, who do you think is going to come out on top and produce the most technology that we’ll use in standards across the board?
MF: I think the main thing we’ll see is a lot of mergers and spinoffs. No one has all of the technology. No two companies have all of the technology.
Intel recently acquired Mobileye, which created algorithms for decision making based on visual data. Then, Intel partnered with BMW to begin creating vehicles. We’re going to see a lot of this soon. The knowledge companies have been acquiring for decades is now being applied in a different way toward automotive applications.

AAC: As these advancements take place, what other challenges do design engineers face in regards to all of the proposals, developments, and adoptions of autonomous vehicle technology?
MF: There are several challenges. One of them is, of course, the standards we’ve already talked about. Another (and this happens over and over again in electronics) is that different companies are coming up with different ideas, and they’re all arguing about what is necessary and what isn’t.
Things need to be closely monitored. If a company becomes too invested in one idea and the market goes the other way, they’ll find themselves in trouble. It’s so important to not only pay attention to the standards but also to the evolution of the methods to accomplish tasks with innovative solutions.
From a safety standpoint, I think the way it has to go is for certain systems to be the standard. For instance, we may decide that every car will incorporate five types of sensors to process the data in a certain way so all decisions are consistent. Then, there will be a much more concrete definition of what these systems will look like, what type of data they will exchange, and how they will make different decisions.

AAC: Do you see any holes or inadequacies in the safety standards for autonomous vehicles currently in place?
MF: At this point, no. There are certainly holes that are going to open up because every day someone is inventing a new way to do something. We all want to think that from the first autonomous car, no one is going to be injured because of some technological glitch, but—statistically speaking—that’s just not going to happen. It’s a learning process.
Take airplanes as an example. When airplanes first began flying in the 1930s, you really took your life into your own hands each time you got into an airplane. One of the first accidents that happened was one everyone had thought of but also had said, “this will never happen.” That accident was two planes colliding midair over Arizona. Up until that point, airplanes were not controlled. They just took off anytime because what are the chances of two airplanes hitting each other in the sky?
I think the same thing will happen with autonomous cars. There is going to be the possibility of something that seems so remote going wrong (though, under certain circumstances, it actually won’t be that remote at all). These vehicles will run into instances that their software may have never anticipated. Luckily, with embedded software, autonomous vehicles will learn how to handle these situations and, as they do, that proper handling will propagate to other vehicles through universal updates. As holes and inadequacies pop up, we’ll be able to solve them much more quickly than in the past.

AAC: There's a lot at stake when it comes to test and measurement with autonomous vehicles. We can't help but think of the "Pentium flaw" that was discovered back in the 1990s, where a computing error in the Pentium's floating-point unit was predicted but left unaddressed—and resulted in unreliable CPUs and a $475 million recall. Can you talk a bit about the importance of foreseeing even abstract issues in the context of automotive safety?
MF: Years ago, when I was a magazine editor, I had the opportunity to interview Dennis Ritchie. I got to go to Bell Labs and meet a bunch of the people there, including one of the fellows who had just published a book stating that the microprocessors of the time (around 25 years ago) had reached a scale that made them impossible to test exhaustively. That scale has obviously been far exceeded since, so statistical testing methods are used, but—much like what happened with the Pentium flaw—something tiny can be overlooked.


A 66MHz Intel Pentium CPU with the FDIV bug (or "Pentium flaw"). Image courtesy of Konstantin Lanzet (CPU Collection Konstantin Lanzet) [CC-BY-SA-3.0]

The chance of something similar happening in the automotive industry is not zero. The way things are tested now has evolved so much from 20+ years ago, when you tested every single function of every single pin and every combination—because doing that is physically impossible anymore.
Could a mass-scale crisis unfold? The possibility certainly exists. But, as I mentioned earlier, because so much of an automobile will be controlled with embedded software instead of mechanics, it will be much easier to correct such problems rapidly. Now, that’s not to say there will always be a software workaround, because sometimes that’s just not possible.
Are we missing anything? That’s definitely a question that needs to be on everyone’s mind.

AAC: As companies like Google and Apple enter into the automotive space with autonomous and assisted driving, how do you see that affecting the future of autonomous and electric vehicles?  
MF: That’s a great question because companies like Google and Apple are what we call disruptors. They’re non-traditional companies that don’t have a set of rules they’ve been operating under for 100 years. Automotive companies produce what you expect. They’ve introduced evolutions, certainly, but they’ve always produced automobiles.
To see external companies that have never had a finger in transportation come into the arena is interesting. Google and Apple basically invented user experience. Before them, companies had never really talked about it. In the auto industry, that’s a relatively new concept. I remember, when I was a kid, there was a thriving business that produced plastic cup holders because there weren't cars on the market that came with cup holders! How do you miss that part of the experience?


The Director of Growth at Voyager (a company that builds self-driving taxis) reveals one of the first images of Apple's autonomous vehicle

The development of cars has been pretty linear for decades, so this is going to be huge. I compare it to the introduction of compact discs. When CDs came out, they were so much better than vinyl records that things changed almost overnight. I remember playing my first CD and thinking that CDs would completely wipe out vinyl records in the next ten years—it didn’t take ten years. In about ten months, sales had almost completely flipped over to CDs, and the acceptance was incredibly fast because the design and technology were so much better.
I think that’s what we’re going to see with autonomous cars. Not only because they’re autonomous, but because the whole experience is going to change and be valuable to the user. You won’t just sit there and drive for eight hours, losing out on that time. You can work, hold a conference call, or sleep while the car does all of the work.
The fact that Google and Apple, as well as a host of other companies like Uber and Lyft, are getting into this industry means that what we see in five years is going to be really difficult to predict but really exciting to see.

AAC: Out of curiosity, how long do you think it’ll be before we have fully autonomous cars?
MF: It’s tough to say. I think the limiting factor is really how safety standards evolve. You can’t write a law for something that doesn’t exist. Uber’s recent experience in California showed us that. The state of California thought they were being reasonable and Uber disagreed. Rather than exploring options and having open communication, Uber moved over to Arizona. These things will have to evolve as they happen. Until we know we need a law covering something, it won’t happen. We’ll see growth coming in spurts, and recessions when we exceed our ability to control that growth. That’s generally how things come to fruition.
As far as fully autonomous cars are concerned, the numbers are out there. They change periodically as companies claim different goals, but I think between 2020 and 2023 we should see commercial autonomous driving cars available. That’s really not too far off, and I think a lot of people are thinking “hmmmm, I’ll let some other people ride in these cars first to gauge whether I want to get in one or not.”

AAC: So Altium acquired TASKING some time ago, a company that specializes in embedded software design. When it comes to autonomous vehicles, what embedded software design tools do you think are most useful for designers?
MF: Altium acquired TASKING in 2001. From that initial acquisition, we've continued to update the compilers and, in the past 18 months, we added some peripheral tools to help embedded software developers.
Take, for example, our safety checker, which statically analyzes your code and determines whether any memory violations exist, such as sections of code accessing protected memory that they are not supposed to touch. A designer can go through the code by hand and find the same things as our tool, but the amount of time involved is ridiculous, and human beings still make mistakes. Saving that time and heightening accuracy is the impetus behind our tools.
The Profiler is another tool we offer; it allows users to make adjustments to the code so they can optimize for the right mix of speed and size. We've also released a standalone debugger. You can purchase additional licenses of this debugger so there isn't a bottleneck at a project deadline when everybody suddenly wants to debug at the same time.



LAPACK (Linear Algebra PACKage) libraries, on the other hand, have been around since 1992. They have been used and proven for a variety of compute-intensive applications like digital signal processing. Not only will these functions be more important with more sensors and fusion of that sensor data, but the fact that they have been verified correct for decades enhances safety as well. While that may not be as important immediately, as we get more and more sensors and data, LAPACK will become more essential.
Our goal is to provide a complete set of development tools that help accelerate the design process—and ensure it's being done safely. At this point, we have a pretty complete set of tools that cover the majority of problems design engineers run into. The reason we’ve developed a number of these tools is that customers came to us with specific problems. We work to offer tools that help designers solve problems in a more efficient way.

AAC: Last question. What are some of the most surprising changes you’ve seen in this industry?
MF: Compute power and cheap memory have made so many things possible that weren’t in the past. Many of these things—such as cell phones—were thought up long ago but couldn’t be created without cheap, accessible memory.
You ask about the changes I’ve seen in the industry, but I think the more surprising thing is what has remained the same. The first project I ever worked on used the Intel 8051 single chip microcontroller. That is still used! Time has cemented that there are certain constraints that may always exist. This can be seen in the automotive industry where you’re limited in space, cost, and power. All three of those things are critical to think about when you’re designing.


Thank you to Mark for his time and insights!

What Is Digital Twinning? Bringing Industrial IoT Sensors and VR Together

Gartner predicts that 50% of large industrial companies will use digital twins by 2021. What is digital twinning and why is it important in the development of the industrial IoT?

Gartner, a tech industry advisory company, recently released its "top 10 strategic technology trends for 2018". Among them were digital twins, a repeat trend also included in 2017's forecast, which Gartner advises will become an increasingly important part of the industrial IoT. The company predicts that 50% of large industrial companies will use digital twins by 2021.
What is a digital twin and why will it be more important in the year to come?

What Is a Digital Twin?

In the most basic sense, a digital twin is the digital version of a physically existing object. For engineers or technicians who work with things like CAD modeling, this is already a very familiar concept. Using these models, tests can be conducted to gather information on behavior.
What makes a digital twin different from a regular CAD simulation is that the physical twin exists, perhaps as part of an IoT network, where it gathers physical data in real time and feeds it back to the digital model, which then uses that data to improve its simulation.
Having a digital twin of a physical object also provides opportunities for monitoring, troubleshooting, or data acquisition for better iterative designing. More accurate tests can also be conducted without the cost of having to build a physical replica—something that is especially valuable in industries where production is costly (aerospace, for example).
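To make that feedback loop concrete, here is a minimal Python sketch of the idea (all names and numbers are illustrative, not from any vendor's implementation): readings streamed from the physical twin's sensors continuously recalibrate a toy surrogate model, so the simulation drifts toward real-world behavior.

class DigitalTwin:
    """Toy surrogate model; model_param might represent, say, a thermal coefficient."""
    def __init__(self, model_param):
        self.model_param = model_param

    def simulate(self, load):
        # Run the virtual experiment: predict the response to a given load.
        return self.model_param * load

    def calibrate(self, load, measured, rate=0.1):
        # Nudge the model toward the real-time sensor data from the physical twin.
        error = measured - self.simulate(load)
        self.model_param += rate * error / load

twin = DigitalTwin(model_param=1.0)
sensor_stream = [(10, 12.0), (20, 23.8), (15, 18.1)]  # (load, measured response) pairs
for load, measured in sensor_stream:
    twin.calibrate(load, measured)
print(twin.model_param)  # drifts toward ~1.2, the behavior the sensors actually report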

Digital Twin Real-World Applications

Virtual Reality

Siemens is utilizing the concept of digital twins in one of its recent gas turbines, in which over 500 sensors are installed. The sensors provide detailed information on temperature, pressure, and motion. Engineers are then able to visualize the turbine in a virtual reality environment, where they can inspect the turbine and run tests based on its twin’s provided data. Siemens engineers can even immerse themselves in the digital twin by wearing VR goggles.

A turbine being visualized in a virtual reality environment. Image courtesy of Siemens.

This has enabled virtual and remote maintenance, and it allows team members who are remote from one another, and from the turbine in question, to work together on the digital twin.


Smart Cities

Three companies currently design digital twins for smart cities: GE Digital, Toshiba IoT, and Dassault Systèmes. The latter, Dassault Systèmes, has partnered with Singapore to create a digital twin of the city. Similar to Google Street View, the digital twin of Singapore allows users to traverse the city from an aerial point of view.


Information on the dimensions, materials, and engineering specs of buildings and infrastructure can be made available, as well as information on rental rates. It is hoped that the digital twin of a city like Singapore can afford policymakers and designers better insight into everything from economic policy to safety, efficiency, and environmental concerns.

Wind Farming

GE is making use of digital twins in its wind farm systems. Each wind farm turbine is fully equipped with sensors and software that connect online to provide real-time data. This data is then used to visualize a fleet of wind turbines, with information ranging from efficiency to required maintenance. This gives operators the insight to make decisions about future repairs and design choices.
GE reports that the digital twinning of its wind turbines has increased annual energy production by 16% for its customers.




Sensors connected to the industrial Internet of Things are capable of collecting more data than ever. Feeding that information into digital models of physical devices allows in-depth analysis, experimentation, and optimization—without disturbing the physical environment. This allows iterative design improvements to take place quickly and at minimal cost.
Beyond the industrial IoT, how will we see digital twinning spread in 2018? Where do you see the most potential? Share your thoughts in the comments below.


Featured image courtesy of Siemens.

Increasing Data Demands Pushing New Data Storage Technologies

An uptick in interest in cryptocurrencies has also increased interest in fast, high-capacity, and especially secure data storage. Here are some developments in the data storage realm.
IBM recently stated that 2.5 quintillion bytes of data are created every day and that 90% of the world’s data has been created within the past several years.
A lot of focus has been put on processing that data and speeding up computation with more sophisticated algorithms or high-performance computing. One aspect of this issue that is sometimes overlooked is the technology being used to store the massive amount of data now being collected.
Engineers and designers developing devices for many applications are challenged by the issue of data storage. The increasing importance of power efficiency, fast read/write operations, reliability, capacity, and economy (how much each GB costs) are factors that change how we look at data storage.
As the IoT spreads and enables more in-depth data collection, we've seen improvements in existing memory technologies. SSDs (flash memory) are low-power, compact data storage options that lean toward the more expensive side, whereas more traditional HDDs give you more bang for your buck on capacity (though they are possibly less reliable because of their moving parts). In the more advanced realm, designers and manufacturers may be acutely aware of component supply issues, such as the current scarcity of NAND flash memory.
Let's take a look at some of the hardware associated with memory storage, both what's available and what's coming down the pipe.

Multi-Actuator Technology

A render of Seagate's multi-actuator HDD, which consists of two independent actuators. Image courtesy of Seagate

Seagate is exploiting the concept of “parallelism” to increase read/write speed in traditional hard drives. It accomplishes this with multi-actuator technology, in which two actuators with 16 read/write heads work independently to carry out operations. Both can read or write at the same time, or each can carry out a different task. This doubles the speed of these operations.
Seagate has previously explored multi-actuator solutions for HDDs but initially found them ineffective due to increased design complexity, increased materials required for manufacturing, and increased weight. However, HDDs are now cheap enough and come in high enough capacities that the added complexity of dual actuators becomes a viable solution.
The company sees multi-actuator HDDs being used in data centers, artificial intelligence, and IoT applications.

MAMR: Microwave-Assisted Magnetic Recording

MAMR allows discs of higher coercivity to be written to, allowing for higher granularity. Image courtesy of Western Digital Corp.

MAMR (microwave-assisted magnetic recording), invented by ECE professor Jimmy Zhu of Carnegie Mellon and developed and manufactured by Western Digital Corp, is a solution for expanding the capacity of HDDs. MAMR is said to be able to store 4 terabits of data in 1 square inch of space, with the expectation that HDDs will reach capacities of 40TB by 2025.
This is made possible by a spin torque oscillator which records data at a high precision and density using generated microwave fields.
Western Digital is targeting big data applications and data centers with this materials-based capacity solution.

HAMR: Heat-Assisted Magnetic Recording

Writing head for an HAMR HDD. Image courtesy of Regmedia.

HAMR (heat-assisted magnetic recording) is another capacity-focused solution, meant to increase density in HDDs. Just as the nomenclature suggests, HAMR uses a laser to heat the area of the disk which will have data written to it. This heating makes the material easier to magnetize so that data can be written in much finer resolution, increasing data density.
This technology takes advantage of a property called coercivity: how strongly a material resists changes to its magnetization. Highly coercive media can hold smaller stable magnetized regions, which is what allows finer-resolution writing, but before HAMR, the only option was to use a material that is already highly coercive at operating temperature, and such materials are hard to write to. Coercivity also falls as temperature rises, which is where the heating laser comes in.
HAMR could possibly write data at a density of 50 terabits per square inch, and the required laser consumes only a few milliwatts of power. This makes it an interesting high-capacity solution for many fields.
No HAMR HDDs have become commercially available yet, since there are still design and manufacturing challenges. Seagate demonstrated HAMR HDDs being used in data servers in 2015 and has announced that the first HAMR HDDs will be available in late 2018.

Helium-Filled Hard Drives

Helium-filled HDDs take a different approach to expanding the capacity of hard drives. Photo by Lenny Sharp, courtesy of SanDisk.

Helium-filled HDDs are another capacity solution, but in a different vein from HAMR and MAMR—instead, helium is used to cushion vibrations, smooth out movement among discs in an HDD, and cool the HDD, which allows for more discs to be stacked and used without issues. This also allows for more precise and granular data writing. The discs used are also thinner, providing even more opportunity to increase data density.
Helium is less dense than air, which is why it makes a good cushion and dampener. It's also fairly cheap to use. HGST, a subsidiary of Western Digital, has been developing helium HDDs for years, with consumer-available versions released in 2016. Seagate also offered its own helium-filled HDDs the same year.



Have you worked with any of these data storage technologies? Which do you think holds the most promise for your design needs? Share your experiences in the comments below.


Feature image courtesy of Seagate.

STMicro’s Newest Stepper Motor Driver: A 256-Microstep Driver with Integrated Control Logic

STMicroelectronics offers their new microstepping motor driver that includes control logic and a power stage.

ST recently announced their new STSPIN820, which is an advanced microstepping motor driver that is capable of a stepping resolution of 1/256th of a step and integrates both the control logic and power stage in a small 24-pin QFN 4 × 4 mm package. Seems impressive.


Figure 1. STSPIN820 microstepping motor driver. Image taken from the datasheet (PDF).

With a wide operating voltage range of 7 to 45 V, a host of protection features, and a low standby current of 45µA, this IC could be a good choice for applications such as 3D printers, sewing machines, and robotics.

A "Low" RDSon Value

This motor driver's integrated power stage is advertised as having a low RDSon value (high side + low side) of 1Ω (typical). It seems to me, however, that 1Ω isn't really all that low. In fact, in my dealings with motor drivers—granted my design experience in this regard is not exceedingly extensive—motor drivers with low RDSon values are in the range of hundreds of milliohms. So I would suggest that 1Ω might be better characterized as a typical RDSon value; if you disagree, let us know in the comments.


Figure 2. The STSPIN820’s "low" RDSon values, from the datasheet (PDF).

External Sense Resistors

As noted in the datasheet, this IC requires two external sense resistors (RSNSA and RSNSB), and because these resistors play an important role in the PWM current control—for both slow decay and mixed decay current recirculation conditions—their values must be chosen carefully. Fortunately, ST has provided values for these resistors in a typical application (see image below), along with suggested values for other external passive components. ST also offers a trick for easily achieving lower resistor values with higher power ratings; this trick—though it's probably, hopefully, common knowledge for any electrical engineer—is simply placing multiple resistors in parallel. But hey, maybe ST thought that we might forget about this technique in this particular situation...so thanks, ST, for the reminder.
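For what it's worth, the arithmetic behind the trick is easy to sanity-check in a few lines of Python (the 1 Ω / 0.25 W values below are illustrative, not from the datasheet):

def parallel_sense(r_single, p_single, n):
    """Equivalent resistance and power rating of n identical resistors in parallel."""
    r_eq = r_single / n   # resistance divides by n
    p_eq = p_single * n   # power rating multiplies by n (current shares equally)
    return r_eq, p_eq

r_eq, p_eq = parallel_sense(1.0, 0.25, 4)  # four 1-ohm, 0.25 W resistors
print(r_eq, p_eq)  # 0.25 ohm, rated for 1.0 W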


Figure 3. Recommended RSENSE resistor values and other recommended values for external components. Image taken from the datasheet (PDF).

Lately I’ve noticed datasheets that don’t offer much in the way of straightforward design guidance, so it’s good to see that ST has taken the time to provide this information.

A Set of Protection Features

The STSPIN820 comes with multiple protection features. Though not exactly uncommon among high-end (or even low-end) ICs these days, these features are nonetheless important additions to a reliable, user-friendly motor driver.
  • Short-circuit or overcurrent: Each power output node is protected against overload/overcurrent and short-circuit conditions, including short-to-ground, short-to-VS, and short-circuit between outputs. When an overcurrent condition occurs, the power stage is disabled and the Fault pin is driven low until the overcurrent condition is rectified (see image below).


Figure 4. Overcurrent and short-circuit protection management. Diagram taken from the datasheet (PDF).

  • Undervoltage lockout (UVLO): During power up, the power stage is disabled—and the Fault pin is forced low—until the voltage on the VS pin (actually, there are two VS pins and they must be at the same voltage) rises above the VS threshold voltage (VSth(ON)), which is 6.0 V (typical). The image below shows the undervoltage lockout protection scheme.


Figure 5. Undervoltage lockout (UVLO) protection management. Diagram taken from the datasheet (PDF).

  • Thermal shutdown: When the IC's junction temperature reaches its threshold (TjSD = 160°C), the power stage is disabled and the Fault pin is driven low. Once the junction temperature falls back below 120°C (reflecting the thermal shutdown hysteresis of 40°C), the fault condition is removed (see the image below and the short sketch that follows it).


Figure 6. Thermal shutdown protection management. Diagram taken from the datasheet (PDF).
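As promised above, here's a small Python sketch of that hysteresis behavior; the 160°C trip and 120°C recovery thresholds come from the datasheet text, while the function itself is just an illustration:

def update_fault(temp_c, faulted):
    """One step of the thermal-shutdown state machine."""
    if not faulted and temp_c >= 160:
        return True    # junction hit TjSD: disable power stage, assert Fault
    if faulted and temp_c < 120:
        return False   # cooled through the 40-degree hysteresis band: resume
    return faulted     # otherwise hold the current state

faulted = False
for temp in [150, 161, 140, 125, 119]:
    faulted = update_fault(temp, faulted)
    print(temp, "FAULT" if faulted else "ok")  # 161 trips; stays tripped until 119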

The Microstepping Sequencer

For this device, ST uses three Mode inputs to achieve a stepping range from full-step down to 1/256th of a step. As shown in the image below, each of the three mode inputs (MODE1, MODE2, and MODE3) is clocked in on the rising edge of STCK (the step clock input). To allow for a very quick stepper motor response, the three mode settings can be changed at any time, and the changes are applied immediately.


Figure 7. Step mode configuration using the MODEx inputs, from the datasheet (PDF).


Have you had a chance to use this new stepper motor driver from ST? If so, leave a comment and tell us about your experiences.

Who is Driving Autonomous Cars Anyway?

In this Industry Article, Mark Forbes of Altium explores the problems, solutions, and safety standards surrounding autonomous vehicles.
As we are on the threshold of becoming more “passenger” than “driver” in our vehicles, the question of “how safe is this ‘thing’ driving me around?” starts nagging the neurons. We all acknowledge intellectually that in most situations, automation can probably make the correct decision more quickly than a (potentially distracted) human driver. But what if it fails?
Is the hardware that’s being designed and manufactured to be the “smarts” of autonomous vehicles designed for safety first? The hardware must not only be reliable but also able to detect faults and take corrective (or emergency) actions. The hardware must also assure that protected memory, used for safety-critical purposes, is never touched by unauthorized requestors.
Are embedded software developers paying enough attention to safety concerns? I’m certain that the overwhelming majority make it a top priority. However, there are some standards that can help developers assure that their code is not only safe as written but is likewise safe after compilation and when it runs on the target hardware.

Some Vehicles Have Been Autonomous for Years

Because of all the press surrounding the soon-to-be-realized autonomously driven cars, many people think this is the first foray into self-driving vehicles. In fact, autonomous vehicles have been around for quite a while, just not in consumer environments.


Figure 1. Autonomous vehicles like this self-driving lawn mower have been around for some time. Their operating environment is much different from what autonomous cars will encounter.

John Deere began the foundational work for self-driving tractors in the 1990s and had tractors making turns by themselves by 2000. Now, the tractors are capable of complete autonomous operation, including plowing and harvesting, in addition to making turns. Other autonomous vehicles in widespread use are the “product pickers” used by many online retailers and wholesalers. Amazon, for example, operated more than 45,000 of these robots at the beginning of 2017.

Your Next Car?

Despite the decades of work by robotics and tractor companies, your next car is not likely to be self-driving. Consumer cars face a laundry list of problems very different from those of the autonomous vehicles mentioned earlier, but at the top of that list is one simple fact – warehouses, your home, and farms are private, controlled-access environments, whereas automobile travel entails a completely different set of problems to address:
  • Traffic: Of course, traffic is the most frequently encountered problem that doesn’t affect tractors and robots. Cars must first detect and then determine what to do in response to the sensed information.
  • Signal interference: Out in the big world, things like buildings and trees can interfere with sensor signals and create unexpected results.
  • Hacking: Secure internet communication is necessary for autonomous cars for applications such as over-the-air software updates. Hacking could potentially be deadly.
  • Unexpected experience: Perhaps the most nefarious problem is what decision should be made when a new, unexpected experience occurs. In a tractor or robot, it could just stop, but in a car, that might not be the safest thing to do.
All of these problems must be addressed for one reason: people are riding in the cars, so safety has to be the absolute number one priority.
For decades, cars have had integrated electronics and embedded software for things such as powertrain control, transmission control and fuel injection. Now, Advanced Driver Assistance Systems (ADAS) such as intelligent cruise control and lane departure warnings are beginning to show up in vehicles, which are baby steps toward autonomy. While there are safety issues that have been dealt with in powertrain control, the risk to human life that ADAS failures could cause has driven several standards to be developed to mitigate that risk as much as possible.

Standards Ensure Safety and Interoperability

The primary standard relating to autonomous automobiles is ISO 26262. Its purpose is to ensure proper safety management throughout automotive systems. Part of ISO 26262 is a prioritized set of Automotive Safety Integrity Level (ASIL) classifications. An ASIL classification is set by the severity of the harm that could result from a failure of the hardware (or the software controlling the hardware) in different scenarios, and runs from A (least stringent) to D (most stringent).
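Concretely, ISO 26262 derives the ASIL for each hazard from three classes: severity (S1–S3), exposure (E1–E4), and controllability (C1–C3). The normative mapping is a table in ISO 26262-3, but a well-known shorthand reproduces it by summing the class indices, as in this illustrative Python sketch (not a substitute for the standard itself):

def asil(severity, exposure, controllability):
    """severity 1-3, exposure 1-4, controllability 1-3 -> ASIL (or QM)."""
    total = severity + exposure + controllability
    # Sums below 7 need only standard quality management (QM).
    return {7: "ASIL A", 8: "ASIL B", 9: "ASIL C", 10: "ASIL D"}.get(total, "QM")

print(asil(3, 4, 3))  # worst case (severe, frequent, uncontrollable): ASIL D
print(asil(1, 2, 1))  # minor, rare, easily controllable: QM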
Compliance with ISO 26262 requires that hardware and software be designed for safety, such as using fallback paths, self-monitoring, and redundancy. Hardware is required to be tested according to the ASIL level in which it is operating. Hardware or software that does not impact any safety-critical function (for example, the GUI for a car’s touch screen) has a lower ASIL classification than ADAS functions that can have a significant impact on safety. Because ADAS functions make high-level decisions for the driver, they usually must be certified to a high ASIL classification, such as ASIL-D.


Figure 2. ASIL compliance levels assess the potential danger of a failure. Level A has marginal safety issues whereas Level D is likely to result in multiple fatalities and major injuries.

That means that getting safety certification of an ADAS product can take quite a long time. The logical way to reduce the time and cost to get certification is to use hardware that has been pre-certified and software development tools (such as compilers and performance libraries like LAPACK) to make full-system-level ISO 26262 certification more expedient and raise the level of safety.
There is a standard for software development as well. Automotive SPICE (ASPICE) is a standard framework for designing and assessing automotive software development processes. Effective implementations lead to better processes and better product quality.


Figure 3. Using safety-certified hardware and software development tools can expedite system certification.

One other standard that bears mentioning is IEEE P2020, which is currently in draft form. It specifies methods and metrics for measuring and testing the quality of automotive images to ensure consistency and create cross-industry reference points. This forward-looking standard is not yet published but will have an important impact on ADAS functions that rely on cameras, image processing, computer vision, and other vehicle perception technologies.

Choosing Hardware and Software with Standards in Mind

When choosing hardware and software for automotive applications, especially ADAS applications that require stringent safety certification, the best solution is to rely on proven suppliers with products optimized for the application.
For automotive and especially ADAS systems, it is critical to use processors and other hardware specifically designed for the safety and power consumption constraints. There are several proven processor options for automotive applications. These are all ISO 26262-compliant and make excellent choices. For ADAS applications, most contain multiple cores for both safety and for offloading specific duties to co-processors.
Multiple-core processors can significantly increase safety as well as improve performance. Some have hardware safety cores integrated to ensure memory integrity. Some also have DSP capabilities for processing sensor data. Choosing a targeted processor that is already ISO 26262-compliant can greatly simplify your design as well as save time when it comes to certification.
At first glance, “a compiler is a compiler” might seem true, but it certainly is not. All compilers make choices about the code to generate from the C/C++ input. With the myriad of built-in features and cores in automotive-oriented processors, it’s important that the compiler be aware of those features and produce code that is optimized for them.
Choosing a compiler that is ASPICE-certified at Level 2 means that you can rest assured that the product has been developed according to processes that are proven and enable you to meet the required safety standards, which means your application is more likely to be error-free. Not only does using an ASPICE-certified compiler result in better code, it can also save money.

Standards: Too Limiting?

I’ve been asked the question “Do standards limit the creativity of engineers?” Frankly, I believe it to be just the opposite. Standards are like any other design constraint: they take the ambiguity out of the design and let the designer focus on the problem and how to creatively solve that problem within the scope of the constraints.
When safety and operational standards become constraints, it is abundantly clear what performance levels must be achieved. There is no need to define those constraints that have already been defined by a standards committee — that time is now available to define and design creative solutions. Using ISO 26262-compliant hardware and ASPICE-certified compilers also gives time benefits to designers.
All-in-all, the automotive safety standards in place, as well as those being developed, are a key to achieving safer, more efficient and eventually self-driven vehicles.

Cover image courtesy of Steve Jurvetson.

Blockchain’s Applications Extend Beyond FinTech and Cryptocurrencies

Cryptocurrencies and accompanying FinTech (financial technologies) are all over the news. The underlying system that supports these cryptocurrencies is called the blockchain—and it's got the potential to completely change how many industries handle information.
You’ve almost certainly heard about the hype surrounding cryptocurrencies such as Bitcoin (BTC) hitting record all-time highs and turning “imaginary” digital money into major assets for those who have invested in it.
A cursory online search will turn up many opposing viewpoints on the value of cryptocurrency—whether it’s a bubble, say, or the future of money. However, all this hype around cryptocurrency has caused a lot of people to look past the actual underlying technology: the blockchain.
Blockchain technology has a variety of applications beyond just cryptocurrency; it can be used in healthcare, supply chain management, and even cloud storage. Whether you think Bitcoin is an investment bubble that’s about to burst or believe cryptocurrencies are the future of finance, blockchain will likely continue to evolve into a technology we’ll all use in some form in the future.
Let's take a look at how blockchain technology could affect you in the not-too-distant future, even if you're not investing in cryptocurrencies.

What Is Blockchain?

Blockchain is a decentralized “ledger” technology—in extremely basic terms, a blockchain is a record of transactions. This record is visible and accessible by a decentralized, peer-to-peer network. Every time a transaction occurs, the network will either "agree" or "disagree" with it. If more than 51% of the network agrees the transaction has happened, then that transaction is permanently added to the blockchain and all nodes in the network will update to reflect that. This is why mining can be a profitable venture: a mining fee (or a small payment) is given in exchange for verifying and adding blocks to the blockchain.
Each new block incorporates the hash of the most recent block on the chain, which in turn incorporates the hash of the block before that, and so on. This chaining makes it difficult to falsify blocks, since successfully doing so requires recomputing everything further up the chain and having the network confirm it.
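A minimal Python sketch makes the chaining concrete (illustrative only—real blockchains add consensus rules, Merkle trees, proof-of-work, and so on):

import hashlib, json

def make_block(transactions, prev_hash):
    """Bundle transactions with the previous block's hash, then hash the result."""
    block = {"transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block1 = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])
# Tampering with genesis changes its hash, which breaks block1's prev_hash
# link—and every link after it—so the network would reject the forgery.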

Blockchain infographic courtesy of BlockGeeks.

This also prevents double spending from occurring, since all nodes will see whether or not the block in question has been spent or not.
There is also a privacy aspect that affects cryptocurrencies. Each transaction in the blockchain is committed by someone using a pair of cryptographic keys: a public key that acts as the identity of the user without giving away any actual information about them, and a private key that empowers the user to send cryptocurrency.
There are different flavors of blockchain technology which may have various features, but the core concepts are similar. This secure, decentralized record-keeping is what makes the blockchain such a novel and interesting concept.

Blockchain in Healthcare

So, if blockchain is so useful for privacy and information verification, how else can we use it?
If you work in the medical profession—or have had to deal with transferring prescriptions, medical information, or anything else between clinics and doctors—you will know it can sometimes be difficult to reconcile records. Often, health records are not stored on any sort of centralized database that is accessible to every healthcare provider you see. This can be due to privacy regulations, lack of a consistent filing methodology, or a resistance to adopting electronic record keeping in general.
Blockchain is already being looked at as a possible solution for healthcare record keeping. For example, every time a patient checks in to see a healthcare provider, their visit could be recorded and added to the healthcare blockchain. An API used to update this blockchain could give permissions to different organizations on how much information people or institutions could see about each block. The patient’s full information could be restricted without that patient’s private key. Changes to each visit record could also be closely monitored and reviewed, and new information could be appended to the patient’s block. This would help prevent duplicate or incorrect information from being added. Essentially, a medical care blockchain system could provide solutions to some of the most difficult challenges facing medical record keeping: privacy, consistency, and accuracy.
Blockchain could also be useful in tracking pharmaceuticals as they move from manufacturers to pharmacies to patients. It could even help organize information for customized care that's developed for a patient based on genetics and other specific factors.

Healthcare blockchain infographic courtesy of Deloitte.

Blockchain in Supply Chain Management

Because blockchain is so useful for record keeping, it isn’t surprising that it can also be a powerful tool for supply chain management. IBM is already offering blockchain-based supply chain management, including a partnership with Wal-Mart, which conducted a pilot test tracing the origins of produce.
In the pilot, they tracked the movement of food from the farm to the packaging plant, through shipping, onto the shelf, and eventually to the consumer. The information that can be acquired along the way includes the origin of the food, how it was processed, how it was shipped, and how long it’s been on the shelf. This would be valuable for manufacturers, engineers, marketers, and consumers alike.
Most importantly, this information would help manage food safety. In the event that some food might not be safe, it would be easy to identify exactly which food items were affected and track where they had been delivered. It would also make it easier to identify what went wrong in the supply chain to make the food unsafe in the first place.


Blockchain in Cloud Storage

Currently, cloud storage depends on a centralized database or servers that you sync with to manage your data. Because so much data is now being constantly moved and stored, there has been some interest in figuring out ways to move away from this centralized model. After all, a centralized cloud datacenter can be subject to slow access during peak traffic hours, and data can be lost if something happens to the datacenter.
Blockchain cloud storage would use a peer-to-peer, distributed network to encrypt data and store it across nodes. This could speed up access and remove reliance on being able to connect to one single centralized entity.



As you can see, there are many ways that blockchain technology can be used. Even if you don’t believe in the cryptocurrency craze, there are still important features in the underlying blockchain technology that could impact many industries. Any industry that processes data, especially sensitive information that could require security, could see blockchain implemented in the near future. And if the IoT has taught us anything, it's that any industry can evolve to include data collection and processing.
What's your experience with blockchain? Where do you think it will be used in the future?

Quadrature Frequency and Phase Demodulation

This page explores the use of quadrature demodulation with frequency- and phase-modulated signals.
From the previous page we know that quadrature demodulation produces two baseband waveforms that, when taken together, convey the information that was encoded into the carrier of the received signal. More specifically, these I and Q waveforms are equivalent to the real and imaginary parts of a complex number. The baseband waveform contained in the modulated signal corresponds to a magnitude-plus-phase representation of the original data, and quadrature demodulation converts that magnitude-plus-phase representation into I and Q signals that correspond to a Cartesian representation.



It is perhaps not very surprising that we can use quadrature demodulation to demodulate AM signals, considering that a quadrature demodulator is simply two amplitude demodulators driven by carrier-frequency reference signals that have a 90° phase difference. However, one of the most important characteristics of quadrature demodulation is its universality. It works not only with amplitude modulation but also with frequency and phase modulation.

Quadrature Frequency Demodulation

First let’s look at the I and Q waveforms that are produced when we apply quadrature demodulation to frequency modulation. The received FM waveform is a 100 kHz carrier modulated by a 100 Hz sinusoid. We’re using the same quadrature demodulator that was used in the AM simulation; it has two arbitrary behavioral voltage sources for performing the multiplication, and each voltage source is followed by a two-pole low-pass filter (the cutoff frequency is ~1 kHz). You can refer to the page on How to Demodulate an FM Waveform for information on how to create an FM signal in LTspice.



Perhaps the common reaction to this plot would be confusion. What do these odd-looking signals have to do with the constant-frequency sinusoid that should result from the demodulation process? First let’s make two observations:
  • Clearly, the frequency of the I and Q signals is not constant. You may find this a bit confusing at first, since we know that I/Q modulation involves the amplitude modulation of quadrature carriers. Why is the frequency changing as well? It’s essential to remember that these I/Q signals correspond to the modulating signals, not to the quadrature sinusoids that would be added together in a quadrature modulator. The frequency of the modulated quadrature carriers does not change, but the baseband waveforms that serve as the amplitude-modulating signals do not necessarily have constant frequency.
  • Though we cannot intuitively interpret the information in this plot, we can see that the signals exhibit periodic variations and that these variations correspond to the period (=10 ms) of the 100 Hz baseband signal.

Finding the Angle

Now that we have I/Q signals, we need to somehow process them into a normal demodulated waveform. Let’s first try the approach that we used with amplitude modulation: use a bit of math to extract the magnitude data.



Clearly this didn’t work: the magnitude signal (the red trace) doesn’t look like a sinusoid, and the frequency is incorrect (200 Hz instead of 100 Hz). After further consideration, though, this is not surprising. The original data is characterized by magnitude and phase; when we apply the √(I² + Q²) computation, we are extracting the magnitude. The trouble is, the original data was not encoded in the magnitude of the carrier—it was encoded in the angle (remember that frequency modulation and phase modulation are two forms of angle modulation).
So let’s try a different computation. Let’s extract the angle of the I/Q data rather than the magnitude. As shown in the right-triangle diagram above, we can do this by applying the following equation:

φ = arctan(Q / I)

Here is the result:



This doesn’t look good, but we are actually getting close. The red trace represents the instantaneous phase of the original data. (Note that the trace seems more erratic than it really is because the angle is jumping from –90° to +90°, or vice versa). Frequency modulation, though based on phase, does not encode information directly in the phase of the carrier. Rather, it encodes information in the instantaneous frequency of the carrier, and instantaneous frequency is the derivative of instantaneous phase. So what happens if we take the derivative of the red trace?



As you can see, we have now recovered a waveform that is sinusoidal and has the same frequency as the original baseband signal.

How to Design an Arctangent Circuit

At this point you might be wondering why anyone would want to bother with I/Q demodulation. How in the world would anyone design a circuit that generates an output signal corresponding to the derivative of the arctangent of two input signals? Well, to answer the question posed in the title of this section, you digitize the signals and compute the arctangent in firmware or software. And this brings us to an important point: Quadrature demodulation is especially advantageous in the context of software-defined radios.
A software-defined radio (SDR) is a wireless communication system in which significant portions of the transmitter and/or receiver functionality are implemented via software. Quadrature demodulation is highly versatile and enables a single receiver to almost instantaneously adapt to different types of modulation. The I/Q output signals, however, are far less straightforward than a normal baseband signal produced by standard demodulator topologies. This is why a quadrature demodulator and a digital signal processor form such a high-performance receiver system: the digital signal processor can readily apply complicated mathematical operations to the I/Q data produced by the demodulator.
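As a concrete illustration of that software-based approach, here is a short NumPy/SciPy sketch that mirrors the simulation described above—a 100 kHz carrier, a 100 Hz modulating sinusoid, and two-pole ~1 kHz low-pass filters—with a peak deviation of 400 Hz chosen for illustration (the deviation is an assumption, not a value from the article):

import numpy as np
from scipy.signal import butter, filtfilt

fs = 2_000_000                       # sample rate, well above the 100 kHz carrier
t = np.arange(0, 0.05, 1 / fs)       # 50 ms of signal
fc, fm, dev = 100e3, 100.0, 400.0    # carrier, baseband, peak deviation (assumed)

# Received FM waveform: phase deviation is the integral of the frequency deviation.
rx = np.cos(2 * np.pi * fc * t + (dev / fm) * np.sin(2 * np.pi * fm * t))

# Quadrature demodulation: multiply by quadrature references, then low-pass (~1 kHz).
b, a = butter(2, 1e3 / (fs / 2))
i_sig = filtfilt(b, a, rx * np.cos(2 * np.pi * fc * t))
q_sig = filtfilt(b, a, rx * -np.sin(2 * np.pi * fc * t))

# Angle via atan2, unwrapped; instantaneous frequency is its derivative.
phase = np.unwrap(np.arctan2(q_sig, i_sig))
freq = np.diff(phase) * fs / (2 * np.pi)  # recovers a 100 Hz sinusoid, +/-400 Hz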

Quadrature Phase Demodulation

The same general considerations that we discussed in the context of quadrature frequency demodulation apply also to quadrature phase demodulation. However, to recover the original data we take the arctangent of (Q/I) rather than the derivative of the arctangent of (Q/I), because the baseband signal is encoded directly in the carrier’s phase rather than in the derivative of the phase (i.e., the frequency).
The following plot was generated by applying quadrature demodulation to a phase-shift-keying waveform consisting of a 100 kHz carrier and a 100 Hz digital baseband signal that causes the carrier’s phase to change by 180° according to whether the signal is logic high or logic low. As you can see, the red trace (whose value corresponds to the phase of the received waveform) reproduces the logic transitions in the baseband signal.



Notice that the red trace is computed via the “atan2” function. Standard arctangent is limited to two quadrants (i.e., 180°) of the Cartesian plane. The atan2 function looks at the individual polarities of the input values in order to produce angles covering all four quadrants.
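The difference is easy to see numerically; with both inputs negative, plain arctan loses the quadrant that atan2 preserves:

import numpy as np

I, Q = -1.0, -1.0
print(np.degrees(np.arctan(Q / I)))   #  45.0  -- wrong quadrant for the point (-1, -1)
print(np.degrees(np.arctan2(Q, I)))   # -135.0 -- atan2 uses both signs to recover it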

Summary

  • Quadrature demodulation can extract angle information that is relevant to both frequency modulation and phase modulation.
  • Radio systems can use a digital signal processor (in conjunction with an analog-to-digital converter) to apply mathematical analysis to I/Q waveforms.
  • Baseband phase can be obtained by taking the arctangent of the ratio of Q to I; an “atan2” function is needed if the system must be able to reproduce the full 360° of phase.
  • Baseband frequency can be obtained by taking the derivative of the arctangent of the ratio of Q to I.

LED Control for Automotive Applications: A 3-Channel Constant-Current Linear LED Controller from TI

Texas Instruments recently introduced their new 3-channel high-side constant-current linear automotive LED controller that gives designers more lighting design flexibility.
Texas Instruments (TI) has introduced "the first" (to use the words from their press release) automotive 3-channel high-side linear LED controller without internal MOSFETs.
The TPS92830-Q1 is touted as a controller that requires the use of external MOSFETs, as opposed to the internal MOSFETs integrated into conventional LED drivers. Though it is often advantageous (or at least more convenient) to use ICs that require fewer external components, in this case, TI believes that the use of external MOSFETs gives engineers greater flexibility in their automotive lighting designs.


Figure 1. Simplified schematic. Image taken from the datasheet (PDF).

Why Use Linear Regulators?

According to the datasheet's description, in an effort to achieve better lighting homogeneity in front and rear automobile lamps, high-current LEDs are used together with LED lighting diffusers. As a side note: Is it just me or are some (many) of the LED brake lights in new cars about as bright as the sun?! I'm not picking on TI here; I’m just making a general comment regarding automotive LEDs. I wonder if there are federal guidelines/requirements for controlling how bright a car's headlights/tail lights/brake lights can or must be?
Anyway, the use of linear constant-current regulators helps car manufacturers meet strict EMC and reliability requirements. Apparently, however, the challenge with this approach is delivering high current when using integrated power MOSFETs. Enter TI's TPS92830-Q1.

Protections and Features

This IC seems to have plenty of features:
  • Ambient operating temperature range: -40°C to 125°C. Although impressive, this wide ambient operating temperature range is not uncommon for automotive ICs. In fact, this range is normal, and anything less would not be considered automotive grade. Only military-grade ICs (-55°C to 125°C) have a wider ambient operating temperature range.
  • LED Short-to-GND and Open-Circuit Detection: Both the short-to-GND and open-circuit detection features are channel-independent. Once an open-circuit or short-to-GND condition is detected, the TPS92830-Q1 disables the faulty channel and enters an automatic retry mode. When, and if, the auto-retry mechanism determines that the faulty condition is resolved, the IC resumes normal operation.


Figure 2. LED short-to-GND and open-circuit scenarios. Image taken from the datasheet (PDF).

  • LED dimming options: The IC's PWM functionality allows for three methods of dimming the LEDs (a quick average-current sketch follows this list):
    1. Internally generated PWM: The device has an on-board PWM generator that supports synchronization between multiple ICs. In other words, the IC can be connected as a master or as a slave.
    2. Externally generated PWM: Each of the three LED channels can be individually controlled via the three PWM input pins (PWM1, PWM2, PWM3).
    3. Power supply dimming: This occurs when the entire LED driver itself is dimmed by applying PWM to the supply voltage.
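Whichever method is used, the effect on brightness comes down to duty cycle, since the constant-current channel is simply gated on and off. A quick Python sketch (the 100 mA set current is an assumption for illustration, not a datasheet value):

def avg_led_current(i_set_ma, duty):
    """Average current through a constant-current LED channel under PWM dimming."""
    return i_set_ma * duty  # the regulated current is gated by the duty cycle

print(avg_led_current(100, 0.25))  # 25.0 mA average at 25% duty cycle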

Application Hints

TI has provided multiple application and implementation guidelines. In one example, TI offers hints for a typical application circuit for automotive external lighting. This includes a schematic (see image below), design requirements, and a detailed design procedure, including fixed parameters and component values.


Figure 4. Schematic for an automotive exterior lighting application, taken from the datasheet (PDF).

TI also provides layout guidelines (see image below) that include the following: use 2 oz copper PCBs in order to effectively dissipate the heat generated from the MOSFETs and LEDs; and place capacitors as close to the associated pins as possible.


Figure 5. Recommended layout. Image taken from the datasheet (PDF).

One thing that they don't offer and which would be really handy—at least as a starting point—is recommendations for possible MOSFETs to use; yes, I mean manufacturer names and part numbers. Perhaps I could request this information via phone or e-mail, but it would be more convenient to find it right there in the datasheet.
Have you had a chance to use this new 3-channel high-current LED controller? If so, leave a comment and tell us about your experiences.


Featured image created from Texas Instruments collateral.

Altium Releases Designer 18 PCB Design Software

Just in time for the new year, Altium LLC has announced the newest release of the company’s flagship PCB design software—Altium Designer 18.
What differentiates this release from its predecessors is a set of improvements in performance, user interface, and design tools that make the process of creating, managing, and manufacturing complex PCB designs as seamless as possible. To identify areas of improvement, the company used feedback from the Altium user community as well as its own research and development efforts.
Here is a feature overview of Altium Designer 18.

Ability to Handle Larger Designs

Altium Designer 18, unlike its predecessors, takes advantage of 64-bit architecture with improved code for multi-threaded execution. This gives Altium Designer greater access to computer memory for handling large designs and better algorithm execution to make common tasks faster. Generating Gerber files, running design rule checks, and switching from 2D to 3D should all feel quicker as a result. This also helps those designing multi-board projects, which can eat up a lot of memory.
Between Altium Designer 16 and Altium Designer 18, the following performance benchmarks were provided for a 4-layer project with 39.6k tracks, 1925 components, 1267 nets, and 369 polygons:
  • Polygon Repour - 5.0x faster (12:56 mins vs 2:36 mins)
  • Gerber File Generation - 155.6x faster (2 hours 33 mins vs 59 seconds)
  • Online DRC - 5.6x faster (32:30 mins vs 5:46 mins)
  • File Opening with Scene Building - 6.7x faster (9:36 mins vs 1:26 mins)
  • Project Compilation Time - 3.38x faster (54 seconds vs 16 seconds)
Speeding up common tasks in Altium Designer 18 makes PCB designing a smoother experience, and can help get designs to manufacturers or clients faster.

Usability Changes for Menus

A demo of Altium Designer 18 featuring a new UI. Image courtesy of Altium LLC.

Menus in Altium Designer have been reconfigured to make the workflow smoother. Commands and menus deemed “low usage” were removed, though Altium hasn't said much about how it defined that term.
Additions include:
  • A new Properties panel, which combines the Inspector panel and the properties dialog to display important information about the PCB design in one place.
  • A global search function for quick access to commands, information, or design objects (such as component libraries).
  • A new layer control panel that gives more control over layers and masks, and provides filtering options to make focusing on layers of interest easier.
  • A new active bar that provides a place where the most frequently used commands can be quickly accessed or customized to the user's needs.

Interconnected Multi-board Assembly, ActiveRoute, and PDN Analyzer

Altium Designer 18 also has a new project type: the multi-board design project. Previously a feature only available in the most high-end tools, multi-board project management is now available to Altium users for high-density, multi-board projects. Designers can work on multiple boards in one environment, manage connections, synchronize pin swaps across connections, and flag errors in connections or dissimilar net names. Multi-board projects also let designers mechanically model their designs and check for component alignment and collisions. The outcome is more accurate prototyping and fewer design iterations.

A 6-sided PCB multi-board project with collision detection. Image courtesy of Altium LLC

PCB routing can be a long and arduous task even for the most seasoned PCB designer; high-speed auto-routing can lessen the burden and help guide the user. With the new and improved ActiveRoute feature, automated routing can be fine-tuned and adjusted using rules-driven length and phase tuning, along with meander controls, glossing, and pin-swapping.

Length tuning in ActiveRoute. Image courtesy of Altium LLC

Altium Designer 18 also features the PDN (Power Distribution Network) Analyzer 2.0, with a more intuitive and appealing user interface, more powerful features, and improved accuracy. PDN Analyzer 2.0 can analyze multiple power nets concurrently, perform current and voltage limit checks, and provide detailed reports.

Streamlining the Bill of Materials

Finally, one of the last steps of PCB design is the creation of the bill of materials (BOM). The ActiveBOM feature in Altium Designer 18 connects to vendor information to provide real-time component availability and pricing, so that information can be accessed throughout the design process, before final decision-making at the end.
Several user requests have also been implemented to make creating the BOM better—among those improvements are persistent Item/Line numbering, and aliasing for parameters/column names.

ActiveBOM screenshot. Image courtesy of Altium LLC.



Altium Designer 18 is available as a free upgrade to existing Altium subscribers. Otherwise, a free trial is available through the website to give you a taste before buying.
If you've had a chance to work with Altium Designer 18, please share your experiences in the comments below.


Featured image courtesy of Altium LLC.

What’s Inside a Bluetooth Headset

In this post, we will learn what's inside a Bluetooth headset and how to hack it for other useful, personalized applications.

The world is going digital at a rapid pace, and advanced concepts such as Bluetooth are quickly replacing traditional technologies.

What's Bluetooth

What's Bluetooth? It's a wireless transmission technology used for exchanging a wide variety of data, in a precoded form, over short distances between compatible devices such as cell phones, smartphones, laptops, PCs, Wi-Fi systems, etc.
Bluetooth also uses RF waves, but in a digitally coded form, quite unlike traditional FM or AM transmissions.
It's an advanced and enhanced form of wireless technology, designed to connect with many compatible devices at a time without encountering synchronization problems or hurdles.
A Bluetooth headset is a related device designed to exchange (transmit and receive) data using Bluetooth technology with the compatible devices mentioned above.
It's a very interesting RF device that a hobbyist can hack to make it work for any desired customized application. For example, we can use the headset to make our home theater systems completely wireless with crystal-clear response, or maybe use it for controlling a few of the appliances across the rooms in our house or apartment.

Opening a Bluetooth Headset gadget

In order to experiment with a Bluetooth headset, you could buy a typical unit like the one shown below, or if you already have one, you can use it for the hacking procedures discussed here.



To break it open, you can use a screwdriver as shown in the picture below. You will need to work with extreme care and dexterity to make sure you don't damage the internal circuitry.


Once the cover is removed, you will come across another plastic shield, which you can likewise remove using the tip of your screwdriver.

Once the inner protective shield is peeled off, the actual PCB with its various components pops out of the shell as shown below.


In this position, a few important things become visible: two wires running toward a small speaker, two wires toward a built-in MIC, a USB connector, and an attached battery. See below for the details.



 Getting the Assembly Out

To get the entire assembly out of the shell, you can go ahead and remove the speaker and the MIC from their respective locations in order to study them in depth.

Identifying the MIC

The MIC can be found hidden inside a metallic clip, which can be pulled out with some careful effort.



Once removed, the MIC, the speaker, and the PCB with all of its associated components can be studied in detail, as shown in the following figure:



Other important areas of interest within the circuit are the USB socket, since it's the input that receives all the data, and the battery, which rounds out our picture of what's inside a typical Bluetooth headset.


Identifying the Battery

The battery is a 3.7 V, 120 mAh Li-ion battery, as can be seen in the following image:




OK, that's it. Now we know exactly what's inside a Bluetooth headset, and it's time to learn a few simple hacking techniques that will enable us to use any Bluetooth headset unit for performing the intended operations.

Touch free Faucet Circuit

A very simple touch-free faucet circuit, or touch-free tap circuit, can be built using as little as an IC 555 and a few passive components in order to implement contactless water supply operation from the attached faucet or tap.

Drinking Water is Precious

The pure drinking water that we normally get in our cities and homes is precious, and we are always advised to conserve it as much as possible by avoiding unnecessary wastage due to carelessness or negligence.
Especially in public places, this issue can become quite grim, as many irresponsible citizens often forget to close a water tap, or close it only partially, allowing water to be wasted.
An automatic system that takes care of the above condition could be a welcome change in many such places, preventing our precious drinking water from being needlessly thrown down the drain.


Designing an Automatic Water Cut-off

We have already seen how the IC 555 can be used as an effective capacitive switch wherein the device senses a nearby human hand and activates its output accordingly. In the present design, we apply the same concept to build the proposed touch-free faucet circuit.
For higher accuracy and reliability, you could also implement a specialized precision capacitive proximity sensor circuit instead, although the installation procedure would remain the same.
The first circuit below shows a 555 IC application that could be tried for implementing a non-contact faucet design:

Circuit Schematic


image courtesy: elektor electronics
As can be seen in the figure above, the IC 555 is configured as an astable whose pin #2 is used for sensing the proximity, or capacitance, of a human hand.
Pin #2 is terminated with a metallic plate (which could be replaced by the faucet body itself) such that whenever somebody reaches toward the tap to wash their hands, the sensor is triggered, activating the connected relay. The relay finally opens the tap valve to release water.
However, in the above design, the relay remains activated only for a short duration, which means the user might have to wave a hand back and forth repeatedly if washing for a relatively prolonged period.
Another design which is shown below can be executed for the same:

An Improved Faucet Control Schematic



image courtesy: elektor electronics
The proximity detector circuit shown above is a transistor-based design, intended to sense a human hand brought relatively close to the indicated plate.

Circuit Description

Transistors T1 and T2 are connected much like a Darlington pair, forming a high-gain detector stage.
The capacitive plate attached to the base of T1 senses the minute potential differences caused by variations in the plate's capacitance in response to a human hand, and conducts a small current at its emitter, which is picked up by T2 and amplified to a much greater extent at its collector.
This preamplified signal is detected by the FET stages, which further amplify it to a level strong enough to toggle the relay.
Since the proposed touch-free faucet is an electrically activated device, the water control needs to be implemented through a valve mechanism, such as a 12 V solenoid valve system.
A typical 12 V solenoid valve system can be seen below:


Integrating a 12V Solenoid

The two leads are fed with a switchable 12 V supply in order to open and close the water passage through the white plastic pipe. The pipe is inserted in series with the faucet's supply line so that water flow from the faucet is appropriately controlled via the operations discussed above.
The basic connection details of this mechanism, in conjunction with the electronic circuit and the faucet, can be seen below; feel free to customize the arrangement in other ways as per your preference.



Note: If the faucet body does not respond to hand proximity, the system can be reinforced with a small additional metal plate to increase the surface area of the capacitive sensor and thereby ensure reliable operation of the touch-free faucet.

Designing for the IoT: Microcontroller-Free Smart-Passive-Sensors​ ​(SPS)

Designing for the IoT comes with a unique set of challenges that require unique hardware solutions to overcome. Learn how smart passive sensors (SPSs)—sensors that function without a battery and without a microcontroller—are an example of IoT-specific design.
To learn more about IoT-specific design challenges and solutions, check out the accompanying on-demand webinar: Sensors, Power, and the Internet of Things: How Big Data is Influencing How We Design

Sensors and sensor networks are an integral part of the IoT ecosystem. Many markets are already using, or will soon be using IoT devices. You can currently find IoT-connected devices and systems in many industries:
  • Automotive
  • Transportation and marine
  • Home automation and security
  • Logistics
  • Inventory and supply chain
  • Agricultural and livestock
  • Industrial, construction, and power
  • Medical and personal healthcare



In each industry, the needs are different, but the benefits are similar. Among the benefits of utilizing IoT-connectivity are cost savings, process optimization, yield enhancement, analytics, and data storage.
But designing for the IoT can be a challenge because devices need to have reliable connectivity, low-power requirements, and often a small form factor.

Smart ​Passive ​Sensors ​(SPS)

ON Semiconductor has introduced the world’s first wireless sensor—using standard protocol—that is battery-free and microcontroller-free. Let's take a look at how smart passive sensors are designed to fulfill the needs of IoT design.



Features ​and ​Capabilities ​of ​Smart ​Passive Sensors

Due to their unique inherent features (e.g., battery-free, wireless, ultra-thin, and low cost to scale) these sensors allow for new sensing capabilities. One of the most important of these capabilities is dynamically sensing data in challenging applications.
For example, an SPS would be suitable for:
  • Hard-to-access applications: SPSs require less maintenance and can be placed in areas such as underground, inside walls, or areas that are toxic or pose health dangers and/or hazards.
  • Space-constrained applications: Smaller-sized sensors are crucial to fit in tighter spaces, such as within doorways, in RFID tags, and in wearables (e.g., bandages).
  • Using multiple sensors for cost-effectiveness: Cost-effective sensors allow for more data-gathering, multiple data points, and scalability.

Practical ​Applications

High-Power ​Switchgear ​Equipment

To prevent catastrophic failures inside high-power switchgear boxes, it's essential to identify high-resistance points inside the equipment. Because such points can be found through temperature monitoring, the traditional method is to manually monitor them during scheduled maintenance intervals. This manual process is labor-intensive and yields limited data—perhaps only one data point every year. SPSs can wirelessly, and continuously, monitor and analyze temperatures on busbars, circuit breaker contacts, and cable connections.

Smart ​Healthcare

Nurses are experiencing increased workloads, which makes it more challenging for them to effectively monitor each patient's status. SPS devices allow nurses to monitor their patients by sending alerts for conditions such as:
  • Patient is out of bed
  • Temperature change
  • IV bag is empty
  • Catheter bag is full
  • Bed liner needs changing


SPSs that continuously monitor can provide early detection and faster resolution—and, because they are wireless, the sensors will not impede patient comfort.

Server ​Racks

As data centers become larger and larger, the difficulty and cost of monitoring their equipment increases, as well. Some of the major maintenance issues associated with server racks are energy usage and temperature.
A full turnkey solution includes a network of SPSs, as well as reader hardware and software. SPS temperature sensors can monitor—completely wirelessly—the air inlet temperatures inside server racks, helping to optimize cooling efforts. This saves energy and reduces costs. Additionally, these wireless sensors can provide a means for early detection of equipment failure and can also help track assets, thereby lowering labor costs.

Digital ​Farming

Sensors are used in animal husbandry for identifying specific livestock and for monitoring temperatures. Animal identification can be used to regulate feeding schedules and for tracking various factors such as milk production indicators. An animal’s temperature can be used for early detection of illness or for detecting ovulation.
SPS wireless, battery-free, and maintenance-free sensors offer improved accuracy, combine animal identification and temperature sensing into one device, and can be placed either on an animal’s skin or injected beneath it.

Cold ​Chain/Logistics

SPS sensors can be used to monitor the temperatures of food and/or pharmaceuticals during shipment. The continuous temperature monitoring of these goods allows for immediate detection of failures, thus allowing shipments to be rejected before being unloaded—long before any defects can affect customers.

Summary

The IoT enables professionals across many industries to get more information, faster. Although the applications an IoT device might be used in can be vastly different, the actual device requirements are often similar:
  • Reliable connectivity for continuous monitoring
  • Low maintenance
  • Small size
  • Low power needs
ON Semiconductor’s Smart Passive Sensors (SPS) are the world’s first battery-free and microcontroller-free wireless sensors. Their unique features make them ideal for applications including hard-to-access and space-constrained areas. Full turnkey development solutions are available which include the SPS devices, reader hardware, and software.
Learn more about devices designed for the IoT in ON Semiconductor's free webcast, now available on-demand: Sensors, Power, and the Internet of Things: How Big Data is Influencing How We Design

Drones on Mars? NASA Projects May Soon Use Drones for Space Exploration

Drone technology has found its way into a variety of applications ranging from recreational use, photography, security, climate monitoring, and even humanitarian aid. Another domain that we may soon see drones being used in: space exploration.

NASA's JPL (Jet Propulsion Laboratory) recently announced testing of what they've termed a Mars Helicopter Scout (MHS). The scout may be included on the upcoming Mars 2020 mission, a collaborative project led by NASA with a primary mission of determining if life once existed on Mars. The idea is that a helicopter-style drone could help provide better mapping and guidance that will give mission controllers more information to help with path planning and hazard avoidance, as well as identifying points of interest.

How else could we eventually see drone technology used in space exploration? Here's a look at the MHS, other NASA drones, and what kind of challenges engineers face when trying to design a space-ready drone.

Why Send a Drone to Space?

The Mars Helicopter Scout is a payload intended to be part of the Mars 2020 mission. One of its duties, beyond scouting points of interest and potential hazards (say, storms), is to help plan travel routes for the main rover. Even so, this use of the MHS would mainly be a demonstration, since the drone would have severely limited flight capabilities. The proof of concept is important, though, because adopting helicopters and drones into space exploration could greatly help achieve operational objectives.
Some of the known specs of the MHS:
  • Weight: 2.2 lbs
  • Blade span (co-axial): 3.6 ft
  • Chassis dimensions: 5.5 in x 5.5 in x 5.5 in
  • Power: 220 W
The MHS is expected to have a range of just under 657 yards and a maximum flight altitude of 130 feet. It will carry a high-resolution, downward-facing camera and is designed to land on the Martian surface with shock-absorbing feet. The helicopter would get about three minutes of flight time every sol (a Martian day, equivalent to one Earth day plus about 40 minutes). It would use autonomous control and communicate with the rover directly.


In 2016, NASA determined that an additional $15 million in funding would be required to keep the progress of the MHS on track. As recently as February 2017, the MHS was potentially on the list for exclusion from the Mars 2020 mission as there were concerns that the project may go over its mass budget.
So far, the project still seems to be in the running for going to Mars: NASA's Mars Institute is reported to be conducting UAV tests in the Canadian Arctic at the Haughton-Mars Project Research Station this fall. The site, Devon Island, is sometimes called "Mars on Earth" and can help determine whether the devices can withstand Martian-esque conditions.

The Horizon of Space Drones

The MHS isn’t the only drone project in the works, however. There is also research going into prospecting drones that may eventually be used for space mining and multi-planetary colonization, as well as path planning/hazard avoidance.
One such project is the "Extreme Access Flyers," which look much closer to the typical quadcopter-style drones often used here on Earth. The Swamp Works laboratory has been working on drones ranging from five feet in diameter to ones small enough to fit in your palm.
They hope the drones can eventually be used for everything from imaging to sample collection, though there is particular interest in resource gathering.
The lab has produced multiple prototypes over the years. One of the major differentiators of these vehicles from our earthly drones is the lack of rotors. Each is designed to utilize whatever gas or even water vapor is available to propel itself, depending on whether it's located on Mars or an asteroid.


Space Environment Challenges

There are certainly unique considerations to take into account when designing a space exploration drone. Particularly relevant to space drone development is the fact that the atmosphere on other planets or celestial objects can be much thinner than what's found on Earth—or non-existent. Mars, for example, has 1% of the atmospheric density of Earth. This is important when determining the mass of the drones, since they will not be able to get the lift required to fly if they're too heavy. On the other hand, if they're too light, they might be difficult to control.

Control is another challenge. Such drones and UAVs would need to be fairly autonomous, since real-time control would not be possible as it is on Earth. It takes approximately 20 hours to send 250 megabits of data to Earth, so live video streams are certainly out of the question.

Finally, there are the daunting challenges of battery capacity and charging. There are a few ways that spacecraft can be powered—the Voyager probes, for example, use RTGs (radioisotope thermoelectric generators). But for small drones that aren't currently being recruited for deep-space missions, the most practical method is likely solar-charged battery power. Finding the balance between mass, battery capacity, and charging time is another element that will need to be considered.

The MHS is an interesting step for UAVs and drones in space exploration. If it succeeds, we may see more missions using drones.


Feature image courtesy of NASA JPL.

How to Check and Calibrate a Humidity Sensor

How accurate is your humidity sensor? Find out with this project.
Humidity sensors are commonplace, relatively inexpensive, and come in many different varieties. Too often, we check the datasheet, use them with an interface, and (as long as the values “look reasonable”) we accept the results.
In this project, we demonstrate how to go a step further and verify the accuracy of a humidity sensor. We also illustrate a general method for sensor calibration and apply the method to calibrate the results to improve the accuracy of the humidity measurements.


Testing setup used in the project (left to right, Quark D2000 microcontroller board, sensor interface, HIH5030 sensor in a micro-environment).

Project Fundamentals

To check the accuracy of a sensor, obtained values are compared to a reference standard. To check the accuracy of a humidity sensor, we use the “saturated salt” method to produce the standards. Put simply, certain salts (i.e., ionic compounds such as table salt or potassium chloride), when dissolved in an aqueous solution, produce an atmosphere of a known humidity (see reference PDF).
These chemical properties are used to create micro-environments of known relative humidity (RH) percentages (i.e., reference standards), and the sensors are read inside the micro-environment. Specifically, we will make a solution in a sealed jar to preserve the atmosphere and then place the connected sensor in the sealed jar. Subsequently, the sensor is repeatedly read and the values recorded.
By repeating the procedure using several different salts, each producing a different relative humidity, we can develop a profile for the sensor under test. Since we know what the relative humidity is for each micro-environment, we can assess the deviations of our sensor readings from those known values, and thus, evaluate the accuracy of the sensor.
If the deviations are substantial, but not insurmountable, we can apply mathematical calibration procedures in software to increase the accuracy of the measurements.

A Word about Safety

Before going further, it is essential that you handle the chemicals used in this project responsibly.

  • Read the safety data sheet (SDS, or sometimes MSDS (material safety data sheet)) for each of the chemicals used (links for the SDS for each salt used are provided below, and you can also conduct literature searches on each salt and their safe handling procedures).
  • Do not inhale or ingest the chemicals.
  • Do not let the chemicals contact your skin or eyes (use gloves and goggles).
  • Do not prepare the solutions in the same area that food is prepared.
  • Properly store the chemicals.
  • Properly discard the solutions and all of the instruments used to prepare the solutions so that exposures do not take place accidentally.
  • Before starting, know what to do if an accidental exposure takes place (see the safety datasheet).

Salts Used

In general, the more RH atmospheres that you can produce for reference standards, the better the characterization of the sensor under test will be. There is, however, always a limit on resources in a practical sense. In this project, four reference standards were used and the salts used to produce the reference standards were chosen to cover a range of possible RH values, but also with consideration to safety, availability, and cost.
The salts below were chosen. In the case of sodium chloride (table salt), pure kosher salt was obtained cheaply at a local grocery store. If you go that route, avoid using table salt with additives, such as iodine or anti-caking agents.

Salts Used in the Project
Salt | % RH (at 25°C) | Source | Safety Data Sheet
Lithium Chloride | 11.30 | Home Science Tools | SDS for LiCl
Magnesium Chloride | 32.78 | Home Science Tools | SDS for MgCl
Sodium Chloride | 75.29 | Various (see text) | SDS for NaCl
Potassium Chloride | 84.34 | Home Science Tools | SDS for KCl

Creating a Micro-Environment

We have standards for nearly everything and there is even one for creating a stable RH from an aqueous solution (see ASTM E104 - 02(2012)).  While my bench, and probably yours, is not an official testing laboratory, it is worthwhile to follow the specifications in the standard as closely as you can.
Note also that the results presented in this project, while collected with care, should not be construed as reflecting or indicating an overall quality statement of the accuracy of any brand of sensor. Only a small number of sensors were tested and those used had different ages and different usage histories.
For each salt, a slushy mixture was created by adding distilled water to a consistency similar to very wet sand. Four or five tablespoons of chemical and one tablespoon of distilled water can be tried, but you may have to do a little experimenting.
The mixture was made in a small jar with a tight seal. Glass or even plastic should work well, so long as it can keep the atmosphere inside. A small hole can be made in the top of the jar to run connecting wires to the sensor interface and then to a microcontroller. The connected sensor is then positioned approximately 0.5-1.0 inch above the mixture. Take care that the sensor never directly contacts the solution or it will likely be damaged. To hold the connection in place and to seal the hole in the cap, some easily removable contact putty can be used.
It is important that you allow plenty of time for equilibration before you take the final reading. I tested this issue empirically, taking readings every minute for up to six hours in selected test cases. In my experience, this was longer than needed, and I settled on 90-120 minutes of equilibration time for each sensor and salt. An average of the last five readings was then used for the final value. In all cases, the five values showed very little, if any, difference.
Additionally, all readings were taken at about 25°C (±1°C) ambient temperature, and the RH value used for each standard was that listed for 25°C (see this PDF for the values).

Sensor and Micro-environment
HIH5030 sensor on a carrier board inside a micro-environment containing sodium chloride.

Hardware

Microcontroller

In this project, we interface the sensors using a Quark D2000 microcontroller. The D2000 is a 3V board with I2C and analog-to-digital interfaces. For more information on the D2000, the reader is directed to these previous AAC articles:
Keep in mind, though, that most any other microcontroller with the appropriate interfaces can be used.

Sensor Interfaces



Sensors tested in the project; A) HIH8121, B) HIH5030, C) DHT-22 (AM2302), D) HIH6030 (on a carrier board).

Four different types of humidity sensors were tested: DHT-22 (two were used), HIH5030, HIH6030, and HIH8121. The schematics below illustrate the simple interface used for each type of sensor, and consulting the linked datasheets will provide background information for the circuits.
  • The DHT-22 is a temperature and humidity sensor with a proprietary serial output (see this AAC article for more information on the serial protocol used).
  • The HIH5030 is a humidity sensor with analog (voltage) output. The interface for this sensor uses an op-amp in a unity-gain configuration for impedance matching.
  • The HIH6030 and HIH8121 are temperature and humidity sensors that use the I2C protocol (see this AAC article for more information on the I2C communication procedures used).


DHT-22 to D2000 interface.

DHT-22 BOM: U1, DHT-22 sensor; R1, 4.7kΩ resistor; C1, 0.1 µF capacitor.


HIH5030 to D2000 interface.

HIH5030 BOM: U1, HIH5030 sensor; U2, MCP601P op-amp; C1, 1.0 µF capacitor; C2, 0.1 µF capacitor.


HIH6030 to D2000 interface.

HIH6030 BOM: U1, HIH6030 sensor; R1 and R2, 2.2 kΩ resistor; C1, 0.22 µF capacitor; C2, 0.1 µF capacitor.


HIH8121 to D2000 interface.

HIH8121 BOM: U1, HIH8121 sensor; R1 and R2, 2.2 kΩ resistor; C1, 0.22 µF capacitor.

Sensor Software

All of the programs for gathering sensor data are written in the C language and can be downloaded by clicking on the “Humidity Sensor Project Code” button. Each is commented and straightforward. For each sensor, the program simply reads the sensor every minute and sends the value to a serial monitor. As such, they should be easy to adapt to your particular application.
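As a rough sketch of that structure (the stubs stand in for the sensor-specific read routine and a board-specific delay; this is not the exact project code):

#include <stdio.h>

/* Stubs -- replace with the DHT-22 (serial), HIH5030 (ADC), or
   HIH6030/HIH8121 (I2C) read routine and a real delay for your board. */
static float read_humidity(void) { return 50.0f; }
static void delay_ms(unsigned long ms) { (void)ms; }

int main(void)
{
    /* Read once per minute and log to the serial monitor. */
    for (unsigned int minute = 0; minute < 120; minute++) {
        float rh = read_humidity();
        printf("%u, %.2f\n", minute, rh);
        delay_ms(60000UL);
    }
    return 0;
}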


Screenshots of output from DHT22.c (left) and HIH5030.c (right).

Program files for the project can be downloaded by clicking the link below.


 

Sensor Evaluation Procedure

The table below contains the data from evaluating the sensors in each of the four micro-environments.
Percent RH for the Test Sensors (OBS = observed value, ERR = error as difference from standard, RMSE = root mean squared error)
Reference RH | DHT #1 OBS / ERR | DHT #2 OBS / ERR | HIH5030 OBS / ERR | HIH6030 OBS / ERR | HIH8121 OBS / ERR
11.30 (LiCl) | 12.56 / 1.26 | 16.29 / 4.99 | 13.02 / 1.72 | 20.79 / 9.49 | 12.31 / 1.01
32.78 (MgCl) | 32.36 / -0.42 | 33.79 / 1.01 | 33.46 / 0.68 | 40.77 / 7.99 | 32.43 / -0.35
75.29 (NaCl) | 73.04 / -2.25 | 74.50 / -0.79 | 77.74 / 2.45 | 83.83 / 8.54 | 76.63 / 1.34
84.34 (KCl) | 82.30 / -2.04 | 82.15 / -2.19 | 85.84 / 1.50 | 93.43 / 9.09 | 85.01 / 0.67
RMSE | 1.657 | 2.799 | 1.708 | 8.796 | 0.920

Once you have collected the data from the sensor performance in stable environments of known relative humidity, you can numerically evaluate a sensor’s accuracy.
Note that in the table, we calculated the error for each sensor at each RH standard. We can’t, however, simply average those values to evaluate the sensor because some values are positive and other values are negative. If we simply took an average, the resulting value would minimize the average error since the positive and negative values would cancel each other.
Instead, we calculate a root mean square error (RMSE) to characterize the sensor’s accuracy. The formula for RMSE is:

RMSE = √( Σ (Oᵢ − Iᵢ)² / n )

where O is the observed sensor value, I is the ideal sensor value (i.e., the reference standard), and n is the number of reference standards. To calculate RMSE, we square each error (the deviation from the reference standard), then calculate the arithmetic average of those values, and finally, take the square root of the average.
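The same calculation is easy to automate. Here is a small C sketch, using the HIH6030 column from the table above as sample data:

#include <stdio.h>
#include <math.h>

/* RMSE: square each deviation, average the squares, take the square root. */
static double rmse(const double *obs, const double *ideal, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++) {
        double err = obs[i] - ideal[i];
        sum += err * err;
    }
    return sqrt(sum / n);
}

int main(void)
{
    /* HIH6030 readings vs. the four salt standards (values from the table). */
    double obs[]   = {20.79, 40.77, 83.83, 93.43};
    double ideal[] = {11.30, 32.78, 75.29, 84.34};
    printf("RMSE = %.3f\n", rmse(obs, ideal, 4));  /* prints RMSE = 8.796 */
    return 0;
}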
Once you have characterized the accuracy of the sensor, you can use the RMSE to decide whether it is necessary to calibrate the sensor. In some cases, the RMSE is small and completely acceptable for your application and you can reasonably decide that no calibration is required.
For example, the results for the HIH8121 are impressive. The RMSE is less than 1% and all sample points have an error less than 2%.
On the other hand, in some cases, you may find that the sensor response is so poor and irregular that you simply decide that another sensor is required for your application.
The decision to calibrate should always take into consideration the degree of accuracy necessary for the task. Nevertheless, we can improve the accuracy of the sensor readings by calibration, for all of the sensors in the table.

Sensor Calibration Procedure

To calibrate a sensor, we need to first mathematically determine the function that relates the ideal values to the observed values. A linear regression procedure can be used to determine that function.
The word “linear” in the name of the regression procedure does not mean a linear function. Instead, the term refers to a linear combination of variables. The resulting function can be linear or curvilinear. All three of the polynomial functions below represent linear regression (note: we are ignoring the 0-degree case, which is not useful in this context).
  1. y = ax + b (first degree, linear)
  2. y = ax² + bx + c (second degree, quadratic)
  3. y = ax³ + bx² + cx + d (third degree, cubic)
In the current project, we calculate sensor values using four reference standards (i.e., n = 4). Thus, a third-degree polynomial is the highest-degree polynomial that we can calculate. It is always the case that the highest-degree polynomial possible is n - 1, and in this case that means 3 (4 - 1).
Least-squares procedures are ordinarily used for linear regression. In this procedure, a curve is fit such that the sum of the squared deviations of each datum from the curve is as small as possible. There are many programs available that use least-squares procedures to perform linear regression. You can even use Excel (click here for more information).
It should also be noted that we do not have to use linear regression; nonlinear regression (yielding, for example, a power function or a Fourier-series fit) is also an option. Linear regression, however, is well suited to our project's data and, further, the software correction (calibration) is easily implemented. In fact, in this project, I don't believe you would gain much of anything by using nonlinear regression.

Choosing the Polynomial

In theory, we want to use the polynomial that best fits the data. That is, the polynomial whose coefficient of determination, denoted r² (or R², pronounced “R squared”), is closest to 1. The closer r² is to 1, the better the fit. With least-squares estimation, it is always the case that the higher the degree of the polynomial used, the better the fit.
You do not, however, have to automatically use the highest-degree polynomial possible. Since calibration will take place in software, there may be cases in which the use of a lower-degree polynomial represents a speed and/or memory advantage, especially if the accuracy to be gained by using a higher-degree polynomial is very small.
Below, we demonstrate the calibration procedures for the HIH6030 sensor using polynomials of different degree, and in so doing we will illustrate the general procedure which is applicable to any degree of polynomial that you choose to use.
Using the data from the previous table, we first perform the least squares regression procedure to determine the coefficients for each polynomial. Those values will come from the regression software package used. The results are below, including the r2 values.
  1. Linear: y = ax + b; a = 1.0022287, b = -8.9105659, r² = 0.9996498
  2. Quadratic: y = ax² + bx + c; a = -0.0012638, b = 1.1484601, c = -12.0009745, r² = 0.9999944
  3. Cubic: y = ax³ + bx² + cx + d; a = 0.0000076, b = -0.0024906103, c = 1.2061971, d = -12.7681425, r² = 0.9999999
The observed values can now be modified using the calculated functions. That is, the sensor readings can be calibrated as illustrated in the table below (note that OBS, Corrected, and ERR values are rounded to two decimal places).

HIH6030 Observed and Calibrated Values Using Polynomials
Ref RH | RAW OBS / ERR | 1st Degree Corrected / ERR | 2nd Degree Corrected / ERR | 3rd Degree Corrected / ERR
11.30 | 20.79 / 9.49 | 11.93 / 0.63 | 11.36 / 0.06 | 11.30 / 0.00
32.78 | 40.77 / 7.99 | 31.95 / -0.83 | 32.83 / 0.05 | 32.78 / 0.00
75.29 | 83.83 / 8.54 | 75.11 / -0.18 | 75.85 / 0.55 | 75.29 / 0.00
84.34 | 93.43 / 9.09 | 84.73 / 0.39 | 84.83 / 0.49 | 84.34 / 0.00
RMSE | 8.795736 | 0.562146 | 0.371478 | 0.00212

It can be seen that all three of the polynomials produced a significant decrease in the RMSE, compared to the observed measures, and that is why you calibrate. The graph below illustrates the improvement using the 1st degree polynomial. Note how the calibrated (corrected) data points now lie near the ideal diagonal.


Scatter plot of output from an HIH6030 sensor.

 

Calibration in Software

Once we have run the calibrations and chosen the polynomial, we can modify the software to incorporate the corrections into the sensor data. Using the HIH6030.c program as an example, we can modify the code as follows:



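A sketch consistent with the coefficients computed above (RH holds the raw reading already calculated in HIH6030.c; this is illustrative, not the exact project listing):

/* Apply each calibration polynomial to the raw reading RH.
   Coefficients come from the least-squares fits above; in practice,
   keep only the polynomial you have chosen. */
double RHCal1 = 1.0022287 * RH - 8.9105659;                          /* linear    */
double RHCal2 = -0.0012638 * RH * RH + 1.1484601 * RH - 12.0009745;  /* quadratic */
double RHCal3 = 0.0000076 * RH * RH * RH - 0.0024906103 * RH * RH
              + 1.2061971 * RH - 12.7681425;                         /* cubic     */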
The initially calculated variable is RH. In the code lines above, we created three new variables representing the calibrated readings (RHCal1, RHCal2, RHCal3). For illustration, a variable is created for each of the three polynomials, whereas, in practice, you would calibrate the sensor value using only the chosen polynomial.

Summary Example

To summarize the steps for checking and calibrating a humidity sensor, a final example is presented using data from the DHT-22 (#2) sensor that was evaluated.
  • The first step is to evaluate the sensor performance using reference standards.
This was done using the salts in micro-environments to produce atmospheres with known RH. The data were collected and appear in a table previously presented. We can characterize sensor accuracy using the RMSE term, which also appears in the table. Based on the results of the first step, we can decide whether to perform sensor calibration. If so, then proceed to the second and third steps.
  • The second step is to perform linear regression to determine a function that relates the ideal RH value (from the standards) to the observed value from the sensor readings.
Here, we chose a cubic polynomial (y = ax³ + bx² + cx + d, where y is the calibrated value and x is the sensor reading) and determined that the coefficients are a = 0.000091367, b = -0.01452993, c = 1.77623089, and d = -14.17403758.
  • The final step is to modify the sensor values using the polynomial to calculate the calibrated values.
The modifications (calibrations) are implemented in software and translate the sensor readings to their calibrated equivalents.
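With the coefficients just listed, the correction reduces to a single expression in software; as a sketch (x is the raw DHT-22 #2 reading):

/* Calibrated %RH from raw DHT-22 #2 reading x (cubic fit from above). */
double y = 0.000091367 * x * x * x - 0.01452993 * x * x
         + 1.77623089 * x - 14.17403758;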
The graph below illustrates the result of the steps. The plot includes the observed values as well as the calibrated values (for the data collected) that are derived from the calibration polynomial.


Scatter plot of output from a DHT-22 sensor.

Closing Thoughts

In this project we demonstrated a method for evaluating the accuracy of common humidity sensors. That is, by utilizing chemical properties to develop reference standards, we can compare the sensor readings to standard values and independently determine the accuracy of the sensors. Furthermore, we demonstrated a software procedure for calibrating the sensor readings and thereby producing more accurate measures of relative humidity.

Sensor evaluation and calibration are procedures relevant anywhere that sensors are used. In the case of humidity sensors, this project demonstrates relatively inexpensive and easy methods of adding to their value.

VIDEO: ITS PLC Professional Edition


Teardown: Vehicle Code Reader and OBD-II Scanner

In this teardown, we take apart Ancel's vehicle code reader OBD-II scanner, the AD410, to see if we can find anything interesting.
A vehicle code reader is usually a portable device that can help diagnose issues in a vehicle by identifying error codes issued by a car's computer. Most modern readers can be plugged into the OBD-II port—easily accessible in most cars—so many are referred to directly as "OBDII code readers".
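For context on what such a reader actually decodes: each diagnostic trouble code (DTC) arrives as two raw bytes that expand into the familiar five-character form, per SAE J1979. A hedged C sketch of that standard expansion (not the AD410's firmware):

#include <stdio.h>
#include <stdint.h>

/* Expand a raw two-byte DTC into its five-character form. The top two
   bits select the system letter; the remaining bits are the digits. */
static void decode_dtc(uint8_t hi, uint8_t lo, char out[6])
{
    static const char sys[4] = {'P', 'C', 'B', 'U'};  /* powertrain, chassis, body, network */
    out[0] = sys[hi >> 6];
    out[1] = (char)('0' + ((hi >> 4) & 0x03));
    sprintf(&out[2], "%01X%02X", hi & 0x0F, lo);      /* remaining three hex digits */
}

int main(void)
{
    char code[6];
    decode_dtc(0x01, 0x33, code);
    printf("%s\n", code);   /* prints P0133 (an O2 sensor slow-response code) */
    return 0;
}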
If only I had had this tool about three months ago when my truck's service engine light appeared, I could've saved myself a trip to the local auto parts store where the code was read, interpreted, and cleared. This code-reading and code-clearing device looks like a pretty handy tool for any DIY automotive repair enthusiast.
Well, let's take a look under its hood (pun intended)!


The packaging of Ancel's AD410 vehicle code reader, which is both functional and inexpensive

What's Included in the Box

Besides the vehicle code-reading device itself—along with its integrated cable and OBD-II connector assembly—the package includes a user's manual and a micro-USB-to-USB cable, which allows the user to print diagnostic reports via a computer. Nice! See the image below.


What's included with the AD410 code reader

Disassembly Was a Snap

I was a bit surprised at how easily the code reader's enclosure came apart. Only four Phillips-head screws hold the two plastic enclosure pieces together. However, both plastic pieces feel robust, which is necessary for protecting the device should it accidentally be dropped on a garage floor.
Once the two enclosure halves were separated, the single internal PCB readily lifted out. See the two images below.


The two enclosure pieces are easily separated once the four Phillips screws are removed.


The PCB is easily removed from the enclosure by simply lifting it out.

The LCD Screen

Integrated with the LCD screen, which is supported/protected by an orange plastic cover, is a flex circuit that connects to the PCB via an SMD connector.


The LCD display, which is protected with a plastic cover, has an integrated flex circuit that attaches to the PCB.

Also part of the LCD screen are four LEDs, each soldered directly to the flex circuit (see the image below) and used for backlighting the LCD screen.


The LCD screen uses four LEDs for backlighting.

The PCB

It's obvious that a mechanical engineer/designer was involved with the layout of the PCB, given its particular shape, the various holes and slots used for securing the board to the enclosure, and the associated PCB keepout areas.
The PCB layout person did an excellent job in laying out this board by 1) keeping all the components on one side of the PCB, and 2) keeping the board as a simple two-layer design (i.e., there are no internal layers). See the image below.


The PCB is a two-layer design—there are no internal layers. The PCB top side (left) and bottom side (right) held up to a light

The top side of the board is where all the components live, and this side of the PCB appears to use a HASL surface finish, as identified by its tin-lead color.
The bottom side looks to employ either an ENIG or a hard gold finish. These higher-end, more robust, and more expensive finishes are warranted on the PCB's bottom side due to the high-wear areas under the push buttons. See the images below.


All components are located on the top side of the PCB.

  • Crystal: Part marking: 8.000 (no datasheet)
  • Capacitor (qty 4): Part marking: 47 35V RVT
  • NPN Transistor (qty 11): Part marking: 1AM
  • Unknown IC (qty 3): Part marking: 2A (no datasheet)
  • Voltage Regulator: Part marking: 78M05
  • Schottky Rectifier (qty 4): Part marking: SS34
  • Voltage Regulator: Part marking: AMS1117
  • Voltage Comparator: Part marking: ST 393
  • Buzzer: Part marking: HXD (no datasheet)
  • CAN Transceiver: NXP A1050/C
  • Memory (Serial Flash): Part marking: 25Q128FVSG
  • Processor (ARM): Part marking: GD32F103


No components are located on the PCB's bottom side
 

The PCB's top side (left) and bottom side (right) use different surface finishes

Conclusion

This AD410 vehicle code reader looks to be a simple yet well-designed and well-constructed device. The use of the more robust surface finish on the PCB's bottom side should allow the push buttons to function properly for many years of service, and the two hard plastic enclosure pieces, together with the LCD's plastic cover, should protect the device from almost certain, though hopefully accidental, drops on the garage floor or driveway.

Featured image courtesy of Amazon.

Dev Kits from TDK, Add-On Sensor Systems from Bosch Set the Tone for Sensor Incorporation in 2018

Sensor integration unsurprisingly continues to be important for the future of device design. Bosch and TDK showed this month that they're answering the call with both dev kits for easy prototyping and production-ready sensor solutions.
2018 is starting out with a flurry of new resources for incorporating sensors into designs. In particular, motion sensing and IoT retrofitting are the order of the day, which makes sense given that Bosch demonstrated smart-city and automatic-parking technologies at CES 2018 this month. TDK, too, stayed on-trend by largely focusing on sensors for automotive applications.
Here's a look at what sensors were on display this month at CES from Bosch and TDK.

TDK SmartMotion Development Kits

TDK displayed their SmartMotion Platform, a series of four sensor development kits that are designed to work with their Windows-based MotionLink software. The evaluation boards include an embedded debugger, a micro USB socket, and one of several sensors to allow engineers to quickly evaluate and understand specific TDK sensor solutions.

TDK Sensor SDKs

All of the development kits are built around the Microchip G55 Cortex-M4 CPU with a floating-point unit. With an embedded debugger already on the board, users only need to connect the development kit to their computer and start the MotionLink GUI-based programmer.
Two of the kits include a Digital Motion Processor, which runs sensor-fusion algorithms that meld the individual sensor outputs into an inertial measurement unit (IMU).
  • DK-20602
    • Development platform for the ICM-20602 6-axis motion sensor (3-axis gyroscope, 3-axis accelerometer)
  • DK-20648
    • Development platform for the ICM-20648 6-axis motion sensor (3-axis gyroscope, 3-axis accelerometer, Digital Motion Processor)
  • DK-20789
    • Development platform for the ICM-20789 7-axis motion sensor (3-axis gyroscope, 3-axis accelerometer, barometric pressure sensor)
  • DK-20948
    • Development platform for the ICM-20948 9-axis motion sensor (3-axis gyroscope, 3-axis accelerometer, 3-axis magnetometer, Digital Motion Processor)

Bosch XDK and Bosch XDK Sensor Node

The Bosch XDK kit and XDK Node are designed as prototype- and production-ready add-on sensor solutions to adapt almost any device to the Internet of Things. Users who want to integrate the sensors into new circuit board designs would use the XDK kit, while users who want to retrofit a sensor solution into existing products, without redesigning the product or adding technology to existing circuit boards, should use the XDK Node.
The Node is an add-on sensor that can be incorporated in new designs or retrofitted to old designs. Imagine attaching it to existing machinery to sense when a part is off-balance, listen for a bearing that is beginning to fail, or sense the angle of inclination of a crane boom. Sensor data is collected by a 32-bit Arm Cortex M3 microcontroller and transmitted via Bluetooth Low Energy or Wireless LAN to a monitoring station.

Bosch XDK Node

The XDK Node includes the following sensors:
  • Three-axis accelerometer: BMA280 (±2 g...±16 g, programmable)
  • Three-axis gyroscope: BMG160 (±125 °/s...±2000 °/s, programmable)
  • Three-axis magnetometer: BMM150 (±1300 µT x-axis and y-axis, ±2500 µT z-axis)
  • Light sensor: (0.045 lux...188,000 lux, 22-bit)
  • Temperature sensor: BME280 (-20°C...60°C)
  • Barometric pressure sensor: BME280 (300 hPa...1100 hPa)
  • Humidity sensor: BME280 (10%...90% rH)
Additionally, it includes three programmable status LEDs, two programmable push-buttons, a micro SD card slot, JTAG debug interface, and interfaces for extension boards.
The sensor is configured with the free XDK Workbench software solution by Bosch. Their IDE provides high- and low-level APIs with algorithm libraries for interpreting data in a variety of use cases.

Screen capture of XDK Workbench Software from Bosch


Summary

Manufacturers are increasingly adding sensors from Bosch, TDK, and other manufacturers to new-product designs to make the IoT a reality, but that integration takes time.
TDK's dev kits allow for quick setup and accessible prototyping. For those manufacturers who want to side-step a year's worth of product development—or retrofit functioning pre-IoT devices—the Bosch XDK Sensor Node is certainly a viable option, and I fully expect similar snap-in solutions to follow.


Featured image courtesy of TDK.

Anatomy of a Security Flaw Announcement: The Strange Timeline of Spectre and Meltdown

Spectre and Meltdown are two major processor vulnerabilities that represent a serious security issue inherent in millions of devices. Here's a look at the unusual way these issues were revealed to the public and what we can expect going forward.
On January 3rd, news broke that Intel’s chips were vulnerable due to two major bugs that had been discovered, dubbed Meltdown and Spectre—vulnerabilities that have evidently been around since the 90s. Since the initial announcement, information has been coming out at rapid fire, sometimes contradictory in nature and other times downright confusing.
At best, what the average person might understand is that these flaws impact a large majority of computing devices ranging from smartphones to data centers. Logical questions tend to follow, directed at those in the tech industry: Are my devices affected? What's at stake? What should I do?
Of course, another question may also crop up: how long have you known about this?
Here's a look at Spectre, Meltdown, and the journey from discovering a security vulnerability to telling the public about it.

A Brief Look at Speculative Execution

To understand the timeline for addressing these vulnerabilities, we'll first need a basic understanding of what they entail.
There has been quite a bit of coverage on the topic already, but the gist is that Meltdown and Spectre are two recently identified vulnerabilities that arise from “speculative execution”. Speculative execution is a technique in which a processor anticipates upcoming work, executing instructions ahead of time based on predicted program flow. This shortcut makes processing extremely fast, but it comes at a cost.

Meltdown and Spectre flaws were apparently discovered simultaneously. Image courtesy of Windows Central

The security issue in question occurs when data touched during speculative execution leaves traces in shared caches and memory structures. This information is then vulnerable to side-channel attacks and can make otherwise privileged data accessible to malicious attackers.
Speculative execution has been used for roughly 20 years to speed up processing and has become deeply ingrained in processor architecture. It is so entrenched that it could take years more before processors no longer need to rely on this method to achieve their current speeds.
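To make the mechanism concrete, the publicly published Spectre variant 1 example (from the original Spectre paper) is a bounds check that the branch predictor can be trained to speculate past:

#include <stddef.h>
#include <stdint.h>

uint8_t array1[16];
uint8_t array2[256 * 4096];
unsigned int array1_size = 16;
uint8_t temp;  /* keeps the compiler from optimizing the load away */

void victim_function(size_t x)
{
    if (x < array1_size)                    /* the bounds check ...           */
        temp &= array2[array1[x] * 4096];   /* ... speculatively bypassed; the
                                               secret-dependent load leaves a
                                               measurable cache footprint even
                                               though its result is discarded */
}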
So if it will take years to fully address the core vulnerabilities in the processor architecture, does it really matter how and when the public was made aware of them? Let's take a look at how vulnerabilities are typically approached and why.

A Timeline in Disarray

For the most part, when major vulnerabilities or security flaws are publicly announced, vendors are already prepared with updates and patches. While these immediate patches represent the work of hundreds or thousands of quick-thinking engineers, they're hardly due to prescience. When a vulnerability is discovered, the modus operandi has so far been to inform vendors discreetly so that they can begin working on solutions before the news goes public. This standard helps curtail the chance that the vulnerability will be exploited before patches are developed.
The problem is, this only works if all companies and vendors work together and give each other a chance to properly develop, test, and implement these patches. Leaks can happen as a result of media finding out early, a company accidentally letting the information get out, updates being released too early, or sometimes a combination of these factors.
In the case of Spectre and Meltdown, many vendors weren’t ready with patches. Even worse, one of the vulnerabilities is due to a flaw in the inherent design of Intel’s CPUs—definitely not an issue that can be fixed overnight.
“The tier 1 group [of tech companies] are Google, Amazon, and Intel, and they knew about [the problem] since around June of last year, I believe”, says Marty Puranik, CEO of Atlantic.NET, who spoke to All About Circuits. “But most others found out about it probably just after Christmas." As could be expected, Intel was one of the first companies to announce patches on January 4th.
Puranik, who started his first tech company in his college dorm room in the 90s and has been in the industry for quite a while, explains that the company knew something was coming when employees started noticing kernel patches of a specific nature in progress. “That’s how we found out about it, but at the time it wasn’t known it would be two separate bugs. We thought it was going to be one”.
When the vulnerabilities were confirmed, it appeared that the industry would follow the typical script of preparing patches before announcing the problem. But then, somewhere, there was a breakdown.


Even the newest processors released last year are affected by these vulnerabilities. Image from Intel.

“What happened was that there was an embargo [for] when everyone was supposed to come out with patches on January 9th," Puranik explains, "but because this got leaked early, a lot of people who were working on it were still testing and weren’t done with their patches because they thought they had until January 9th to release it."
This lack of preparedness and the wide-scale media coverage most likely contributed to the conflicting information being released on the scope of the problem, what solutions are actually available, and how the updates are going to impact device performance.
For example, Intel originally stated that patching Meltdown and Spectre would produce only a mild performance reduction, but in reality processor performance can be expected to suffer significantly, especially on older processors. In some of the worst cases, these fixes could cause systems to reboot and become unstable.
Even the question of which chips are impacted was not clear initially. Currently, it is known that Meltdown largely impacts Intel chips, while Spectre impacts Intel, AMD, and ARM chips. As a result, Intel has been glaringly in the spotlight, particularly on the subject of transparency. Greg Kroah-Hartman, arguably one of the best-known faces of the Linux Foundation's leadership, notably made restrained but irate comments on "how this was all handled by the companies involved"—a not-so-veiled indictment of Linux being left in the dark while other companies had months more time to develop patches.
Whether the release of information was premature by embargo standards or far too late for the industry to catch up to the tier 1 companies, there's still a mess to clean up. What now?


Intel had 79.3% of the CPU market share as of July 2017. Image courtesy of Daze Info

The Philosophy of Moving Forward

Given how much processors rely on speculative execution for performance, Spectre and Meltdown are daunting problems. And given how many unknowns remain, it's difficult to guess what happens next.
History tells us that the immediate next steps are likely to begin in the realm of operating system updates and echo through the next iterations of architecture design. We have a blueprint for this solution because this is certainly not Intel's first time managing a large-scale vulnerability.
In 1997, the "F00F" flaw was discovered, in which an invalid encoding of a locked compare-and-exchange instruction caused the Pentium to issue bus cycles in locked mode and stop all activity until it was rebooted (AKA "halt and catch fire"). The wide-scale deployment of Intel processors, and the possibility of users losing unsaved data, made the problem significant.
The solution to the F00F flaw? Immediate operating system updates and the B2 stepping update for subsequent Intel processor specifications.

The F00F flaw impacted Intel Pentium chips in 1997. Image courtesy of Tom's Hardware

But, as Greg Kroah-Hartman will tell you, Intel isn't the only company that needs to respond to Spectre and Meltdown. The industry as a whole is at a watershed moment when it comes to security.
So what is a cloud hosting company like Atlantic.NET doing to manage the problem? Puranik says, “We’re in various stages, depending on the operating system. You have to be very methodical and systematic and look at each case and how you are going to handle the problem—you can’t just blanket [a solution] over the entire server farm.” He adds that security is the first priority, followed by solutions to restore performance, echoing the F00F blueprint.
“I don’t think it’ll be a one-and-done patch,” he says. “I think there will be two or three patches after. Right now, people are just trying to get their patches out, and there will probably be a performance hit because they’re going to be focused on security. I think after a few waves, there will be time to come up with ways to patch that don’t impact performance as badly as the first patches. My gut feeling is that we’ll be able to recover some of the performance that’s being lost, but we just don’t know how much.”



Whatever the next steps are in dealing with Spectre and Meltdown, one result is likely—our perceptions of security are going to change, as will the way we design processors.
What will chip makers do in the future if they can't rely on speculative execution? Which alternative performance boosters will be researched as a result?
As Marty Puranik sees it, the changes will radiate beyond immediate fixes and redesigning architectures. The way companies do business will have to adjust, as well: “Any company that has a compliance officer is going to have more work and more checkboxes that vendors will have to check off—more demand on what your standardized response to these types of things are. So I think it’s going to create a lot more work. There’s definitely going to be more work.”


Featured image courtesy of Windows Central

Light Sensors: The Absorption Spectrometer, LIDAR, and Thermal Cameras on Display at CES 2018

As sensors continue to flood the consumer space, prices drop and functionality improves. This article looks at a spectrophotometer, a LiDAR, and an infrared camera on display this year at CES 2018.

SCiO by Consumer Physics

The SCiO by Consumer Physics is a smartphone-linked reflectance microspectrometer with a light source by OSRAM. It scans materials in the environment and compares their reflectance spectra against a database of known spectra to positively identify foods, pills, and anything else in the ever-growing Consumer Physics database.

Clockwise from upper-left corner: 1) The SCiO sensor. 2) Scanning breakfast foods for nutritional value. 3) Spectra of various pills. 4) Identification of tomatoes. Image courtesy of Consumer Physics.

Any time light strikes an object, it can be absorbed, reflected, or refracted; the behavior depends on the wavelength of the light and the material it interacts with. This sensor measures the intensity of light reflected off the surface of an object as the wavelength of the sensor's light source changes. By comparing the measured spectrum to a database of reference samples, the app can positively identify the sample for the user.
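As a rough illustration of the matching step, here is a minimal C sketch that scores a measured spectrum against a set of reference spectra using cosine similarity. The band count and database layout are assumptions made for the example; Consumer Physics' actual algorithms are proprietary.

#include <math.h>
#include <stddef.h>

#define N_BANDS 128  /* assumed number of sampled wavelengths */

/* Cosine similarity between two spectra: 1.0 means identical shape. */
static double spectral_similarity(const double *a, const double *b)
{
    double dot = 0.0, na = 0.0, nb = 0.0;
    for (size_t i = 0; i < N_BANDS; i++) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return dot / (sqrt(na) * sqrt(nb));
}

/* Return the index of the reference spectrum that best matches the sample. */
size_t identify(const double *sample,
                const double refs[][N_BANDS], size_t n_refs)
{
    size_t best = 0;
    double best_score = -1.0;
    for (size_t i = 0; i < n_refs; i++) {
        double score = spectral_similarity(sample, refs[i]);
        if (score > best_score) {
            best_score = score;
            best = i;
        }
    }
    return best;
}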
The applications for this technology are breathtaking. Imagine identifying drugs as they are dispensed to ensure the correct one is being given in a hospital or nursing home, determining exactly what the nutritional value of your breakfast is, or knowing that your baby formula isn't laced with melamine.

FLIR One Pro

The human eye is only sensitive to a very narrow portion of the electromagnetic spectrum that is referred to as visible light. 

Image of various wavelength ranges and their associated names from FLIR.

Frequencies just below the threshold of human vision are referred to as infrared, and FLIR builds cameras and sensors that detect electromagnetic energy in this region and convert it to visible light as false-color images.
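The false-color step itself is conceptually simple: each raw infrared reading is normalized to the temperature range of the scene and mapped through a color palette. The sketch below uses a toy blue-to-red gradient; FLIR's actual calibrated palettes are more sophisticated.

#include <stdint.h>

typedef struct { uint8_t r, g, b; } rgb_t;

/* Map a raw IR reading onto a blue (cold) to red (hot) ramp.
   min_raw/max_raw define the scene's observed range. */
rgb_t false_color(uint16_t raw, uint16_t min_raw, uint16_t max_raw)
{
    if (raw < min_raw) raw = min_raw;   /* clamp to the scene range */
    if (raw > max_raw) raw = max_raw;
    uint32_t span = (max_raw > min_raw) ? (uint32_t)(max_raw - min_raw) : 1;
    uint32_t t = ((uint32_t)(raw - min_raw) * 255u) / span;

    rgb_t c;
    c.r = (uint8_t)t;                                  /* hotter -> redder */
    c.g = (uint8_t)(t < 128 ? t * 2 : (255 - t) * 2);  /* midrange -> green */
    c.b = (uint8_t)(255u - t);                         /* colder -> bluer  */
    return c;
}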

FLIR video wall with two science-grade FLIR cameras and an LED matrix

FLIR has miniaturized its larger cameras and coupled them with the screens and cameras of Android phones and iPhones to give users an affordable way to sense their environments. This is of particular interest to electrical engineers who have to deal with heat dissipation in their circuits.
You can now examine thermal signatures early in the design process and fix issues in-house without having to bring in expensive consultants. The sensors can also be used by homeowners to look for thermal anomalies that would otherwise remain undetectable.

FLIR One Pro. Image courtesy of FLIR.

LiDAR

The PX-80 Mobile 3D Scanner by Occipital (formerly Paracosm) is a handheld mechanical LiDAR unit designed for mobile, large-site surveys. While other LiDAR units are capable of creating large-site scans, they often have to be mounted to vehicles and have their point clouds merged in post-production. The PX-80 assembles its point clouds without relying on GPS, which allows it to be used inside buildings and structures.
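To illustrate what "assembling point clouds without GPS" means in practice, here is a minimal 2D C sketch of the registration step: each scan is transformed into a common world frame using a pose (position plus heading) that the device estimates itself, for example via SLAM. The PX-80's actual pipeline is proprietary; this only shows the rigid-transform idea.

#include <math.h>
#include <stddef.h>

typedef struct { double x, y; } point_t;
typedef struct { double x, y, theta; } pose_t;  /* estimated scanner pose */

/* Rotate a scan point by the pose heading, then translate it
   into the shared world frame. */
static point_t to_world(point_t p, pose_t pose)
{
    double c = cos(pose.theta), s = sin(pose.theta);
    point_t w = {
        c * p.x - s * p.y + pose.x,
        s * p.x + c * p.y + pose.y
    };
    return w;
}

/* Merge one scan into the growing map using the current pose estimate. */
void register_scan(const point_t *scan, point_t *world,
                   size_t n, pose_t pose)
{
    for (size_t i = 0; i < n; i++)
        world[i] = to_world(scan[i], pose);
}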

PX-80 mobile scanner from Paracosm

Innoviz debuted its solid-state LiDAR at CES 2018. With no moving parts, the unit has a restricted field of view; however, that is offset by the fact that it costs a fraction of what a mechanical LiDAR unit does.
Multiple solid-state LiDARs can be incorporated into an automobile to provide 360° coverage for less than the cost of a single current-generation mechanical LiDAR. With a mass-production cost of $100 per unit, these devices will soon find their way onto motorcycles and drones.
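To put that claim in rough numbers: if, hypothetically, six $100 solid-state units were needed for a full 360° ring, the total would be around $600, while current-generation mechanical scanning LiDARs have typically sold for thousands to tens of thousands of dollars.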

Innoviz LIDAR specifications from Innoviz.tech

Summary


As sensors continue to decrease in price, they will undoubtedly change the way we interact with our environments. The next few years should see visual sensors flood the consumer space. Contact the companies discussed above to learn how you can incorporate an optical sensor into your next design.

What Is Mobility as a Service? The Future of Autonomous Vehicles

"Mobility" has been highlighted over and over again at CES this year, a broad concept touted by many as the future of the automotive industry. What is "Mobility as a Service" and what does it mean for autonomous vehicles?
We were first exposed to the term "mobility as a service" amid the pulsating lights and blaring music at Toyota's Monday press briefing at CES. The presentation began with Toyota President Akio Toyoda stating, "My goal is to transition Toyota from an automotive company to a mobility company." This, it seems, is an important shift in what the current generation of Toyota leadership (Toyoda is the third generation of the founding family to lead the company) sees as the future—and therefore the legacy—of the company.


Akio Toyoda at CES 2018

All this talk of mobility might seem redundant to some. After all, aren't automobiles explicitly in the business of mobility? As far as we can tell, "mobility" in the future has two prongs: the mobility of goods and services, and the mobility of people. This distinction becomes clearer when considering Toyota's big reveal of the day—a conceptual autonomous vehicle called the e-Palette.


The e-Palette

The e-Palette is an autonomous cart- or van-like vehicle covered in display screens. It's designed to serve both forms of "mobility as a service", the first of which is a customizable mobile store.
Imagine a tiny retail establishment, say a shoe store, that can be beckoned to a particular place. Customers can enter, try on shoes, and complete a sale (which Toyota envisions occurring wirelessly). Other uses include mobile maintenance facilities, mobile healthcare clinics, and mobile fab labs. In this way, the concept of "mobility as a service" can apply to any good or service that a consumer may wish to bring straight to their doorstep.
But what about the other way around? Surely the future of mobility still focuses on the transport of people.
One of the most important applications of the e-Palette is one that most people are already familiar with: ride-sharing. Toyota's planned fleet of autonomous, fully electric vehicles is the beginning of what it calls "Autono-MaaS" (MaaS being the shortened version of "Mobility as a Service").

Mobile Goods and Services

The portion of Mobility as a Service that brings goods to the consumer is a broad one. It can take the form of mobile stores and restaurants, as Toyota is developing, or simply of autonomous courier services.
Continental Automotive Group, a supplier to automotive OEMs, has also embraced the concept of "mobility" as the future. Continental's own CES press conference partly focused on what it has dubbed the BEE (Balanced Economy and Ecology) Mobility Concept.


The BEE concept vehicle

CEO Dr. Elmar Degenhart emphasized Continental's focus on making cities more efficient, safer, and more ecologically responsible. Smoother traffic, cleaner air, less competition for parking spaces, and safer transport are the keystones of their focus on the future of mobility.
Of course, BEE also has applications that can enable the mobility of people. In a September press release on the BEE, Continental said that it intends the BEE to be "part of a swarm of autonomous electric vehicles". So far, it's described as a platform for experimenting with what autonomous vehicles are capable of. As an example, the BEE could theoretically join a caravan of other BEE vehicles on a bus-like route, but allow an individual traveler to easily split off to their unique destination.

Waymo, Uber, and the Tide of Ride Sharing

Mobility as a Service is not a new concept, depending on how you look at it. I guess you could say that the wheels are already in motion. (Sorry.) Ride-sharing, in general, has quickly become the norm in most cities, even smaller ones where there may only be a handful of drivers available.
Uber and Lyft have taken full advantage of the "share economy" that leverages the existing vehicles of their contracted drivers. This new focus on ride-sharing for major automotive corporations is a signal that they've decided they want a piece of that pie.
Uber has famously been making strides in LiDAR research (and been entrenched in the resulting LiDAR-related lawsuits) with an eye towards autonomous vehicles, giving mainstream automotive OEMs serious incentive to invest in the technology, themselves.


Models of Uber's autonomous vehicles. Image courtesy of Uber's Advanced Technologies Group.

But here's where these automotive companies are trying to ride the wave. In theory, a fleet of e-Palette-style vehicles would be relatively easy to introduce to an urban population. Compare that to the prospect of trying to sell individual autonomous vehicles to consumers, fighting the slow turnover cycle of most cars and the still-palpable wariness surrounding autonomous vehicles in general.
Appropriating the share economy mindset is a way of clawing back some ground that automotive companies have lost over the last few years. At present, this takes the form of collaborations. Uber is partnering with Volvo, Volkswagen, and even Toyota, itself, in various capacities to release autonomous vehicle services. These partnerships are likely to continue forming and evolving over time as Uber is unlikely to get into the car-manufacturing game. Whether an automotive OEM could develop a similar platform to Uber's is less certain.

A Tangible Future for Autonomous Cars?


The mentality of "mobility as a service" presents a viable path forward for autonomous vehicles. As autonomous ride-share vehicles become accepted, public confidence in the idea of driverless cars is likely to grow. More important, perhaps, is the opportunity this plan presents for testing the various sensors, power systems, and infrastructure associated with successful (and safe) autonomous vehicles.