
The I2C Bus: When to Use an I2C Buffer

This article discusses the use cases, benefits, and applications of the I2C buffer.
Of all the serial interfaces used for embedded devices, I2C stands out as my personal favorite. While it may not have the same throughput as other serial communication methods, the ability to control so many devices with only two lines, while having multiple masters, makes I2C an awesome tool for the embedded engineer trying to manage cost, pin count, and complexity.

Sometimes, though, design constraints can complicate an I2C implementation. The I2C buffer is one tool that can make things a little bit easier.

Considering I2C Bus Capacitance

With the 7-bit addressing scheme, a theoretical 128 devices can be connected to an I2C bus. Some of these addresses are reserved, leaving only 112 available. With the new 10-bit addressing scheme, even more devices can be connected. However, every device added to the bus increases the overall bus capacitance, which can be surprisingly high when all the PCB capacitance and device capacitances are added up. In order to comply with the standard, once the 400 pF maximum is reached, no more devices can be placed on the bus. One way of getting around this is to add an I2C buffer to your design. The picture below, taken from this application note (PDF) published by Texas Instruments, shows a typical I2C bus with associated bus capacitance.

Figure 1. Typical I2C bus with associated bus capacitance. Diagram courtesy of TI (PDF).
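
To get a rough feel for how quickly the 400 pF budget is consumed, here's a back-of-the-envelope check written as a small C sketch. All of the capacitance values are illustrative placeholders—take real numbers from your device datasheets (pin capacitance is commonly ~10 pF max per device) and from your layout (trace capacitance is often estimated at a few pF per inch):

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative numbers only -- substitute datasheet and layout values. */
        const double device_pf[] = { 10.0, 10.0, 8.0, 7.0, 5.0 };
        const double trace_pf    = 30.0;    /* estimated PCB trace capacitance */
        const double budget_pf   = 400.0;   /* Standard/Fast mode limit        */

        double total = trace_pf;
        for (unsigned i = 0; i < sizeof device_pf / sizeof device_pf[0]; i++)
            total += device_pf[i];

        printf("Total bus capacitance: %.0f pF of %.0f pF allowed\n", total, budget_pf);
        if (total > budget_pf)
            printf("Over budget -- consider splitting the bus with a buffer\n");
        return 0;
    }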

Introducing the I2C Buffer

When operating in Standard or Fast mode, the I2C bus has a maximum bus capacitance of 400 pF. In Fast Mode Plus, this is increased to 500 pF. Once that limit is reached, any more capacitance puts you outside the standard and outside device specifications. This problem can be especially troublesome when devices and pull-up resistors are chosen and specified before the engineer realizes that more devices need to be added.

The I2C buffer divides the I2C bus into two separate buses, while still allowing devices to communicate across it. This effectively cuts your total bus capacitance wherever the buffer is placed, because the separate buses have separate bus capacitances. This means that for the same pull-up resistors, we get a lower RC time constant and thus a shorter rise time. This ability to reduce rise time is one of the main reasons buffers are used.
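
To see the effect in numbers, here's a short C sketch of the rise-time arithmetic. The pull-up and capacitance values are illustrative; the 0.847·RC factor follows from the I2C spec, which measures rise time between 30% and 70% of VDD:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* t_r = R * C * ln(0.7/0.3) ~= 0.847 * R * C for a resistive pull-up.
           Fast mode allows at most 300 ns of rise time. Values are illustrative. */
        const double r_pullup = 1.0e3;     /* 1 kOhm pull-up          */
        const double c_full   = 400e-12;   /* unbuffered bus: 400 pF  */
        const double c_split  = 200e-12;   /* each buffered segment   */
        const double k        = log(0.7 / 0.3);

        printf("Rise time, full bus:      %.0f ns\n", k * r_pullup * c_full  * 1e9);
        printf("Rise time, buffered half: %.0f ns\n", k * r_pullup * c_split * 1e9);
        return 0;
    }

With these numbers, splitting the bus takes the rise time from roughly 339 ns (out of spec for Fast mode) down to roughly 169 ns per segment.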

Figure 2. Buffer inserted into an I2C bus, cutting bus capacitance in half. Diagram taken from this app note (PDF).

Static Voltage Offset (SVO)

The bidirectional nature of I2C communication means that I2C buffers must employ special techniques to avoid “locking up” the bus. As you can see in the diagram below, if the master pulls the line low, then the slave side gets pulled low. However, the logic low on the slave side also pulls the master's side low. Consequently, when the master tries to release the bus, the slave side is still driving low.

Figure 3. Buffers can lock up the bus if not designed correctly.

One solution to this problem is to use a static voltage offset (SVO). Essentially, a low-voltage Zener diode is used to create an additional threshold voltage on one side of the buffer, such that a logic-low on the SVO side of the buffer can be either a “below the SVO” logic low or an “above the SVO” logic low, depending on whether the logic low was driven by the master side or the slave side. Thus, a controller inside the buffer can determine the origin of the logic low and use this information to prevent lock-up.

Multiple buffers can be used along the bus as a means of managing capacitance. You can’t assume that SVO voltage levels will be identical, even when using the exact same part, so multiple buffers must be arranged to ensure that two SVO sides are never connected together. The SVO also has to be checked against the slave device's logic-low input threshold (V_IL, typically 30% of VDD) to ensure that the offset stays well below it. Many devices use an SVO of 0.5 V, but the value can range at least from 0.1 V to 0.6 V.
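
A quick way to sanity-check those numbers is shown below—a C sketch with illustrative values, not a substitute for the datasheet:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative values: a 0.5 V offset checked against a slave
           powered from 3.3 V, whose logic-low threshold is 0.3 * VDD. */
        const double vdd  = 3.3;
        const double v_il = 0.3 * vdd;   /* 0.99 V for this supply */
        const double svo  = 0.5;         /* typical buffer offset  */

        printf("SVO %.2f V vs V_IL %.2f V: %s\n", svo, v_il,
               svo < v_il ? "OK, but leave margin" : "too high");
        return 0;
    }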

Examples of I2C Buffers

The PCA9515A from Texas Instruments is described as a dual bidirectional I2C buffer. It can be used to run a 5 V bus alongside a 3.3 V bus, so it also functions as a logic-level translator. Here's an application example from the datasheet:

Figure 4. PCA9515A running with two different bus voltages.

The LTC4311 from Linear Tech/Analog Devices is a different type of I2C buffer. Actually, the part is described not as a buffer but as an “accelerator.” It is connected in parallel with the other devices on the bus, and its internal circuitry compensates for large amounts of bus capacitance by detecting positive signal transitions and then injecting additional current that causes the transition to occur more rapidly. The following diagram shows how the LTC4311 is used.

Figure 5. Diagram of how the LTC4311 is used. Taken from the LTC4311 datasheet (PDF).

I2C Buffers in Industrial Applications

In the industrial automation world, electronic systems often control highly sensitive processes that can't be allowed to fail. Mission-critical applications require both the hardware and software to work no matter what. This has led to huge investments in redundant systems and controllers.
Another application of I2C buffers is creating a redundant bus. Two buffers, both running back to the same master, can be used to provide a failsafe in case one bus locks up or is compromised. A redundant I2C bus with redundant devices can be used to mitigate the risk of bus or device failure. Below is an example of what this type of system might look like.

example system I2C bus with redundant devices
Figure 6. An example of an I2C bus system with redundant devices.

Essentially, the master can use the EN pin to control which bus is currently being communicated on and switch to the inactive bus in case of a failure. The secondary bus could be periodically checked to make sure it's still ready to take over if the primary fails. Setups like this add robustness to a design and can be a huge benefit when reliability is a prime concern.
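
In firmware, the switchover logic can be quite simple. The sketch below is a minimal C illustration; buffer_enable() and i2c_ping() are hypothetical placeholders for whatever GPIO and I2C primitives your platform provides:

    #include <stdbool.h>

    /* Hypothetical platform hooks -- substitute your own GPIO write
       and I2C probe routines. */
    void buffer_enable(int bus, bool enable);
    bool i2c_ping(int bus, unsigned char addr);

    enum { BUS_PRIMARY, BUS_SECONDARY };

    /* Enable the primary bus and probe a known device; fall back to the
       secondary bus if the probe fails. Returns the bus now in use. */
    int select_healthy_bus(unsigned char probe_addr)
    {
        buffer_enable(BUS_SECONDARY, false);
        buffer_enable(BUS_PRIMARY, true);
        if (i2c_ping(BUS_PRIMARY, probe_addr))
            return BUS_PRIMARY;

        buffer_enable(BUS_PRIMARY, false);   /* primary unresponsive */
        buffer_enable(BUS_SECONDARY, true);
        return BUS_SECONDARY;
    }

A watchdog task could call a routine like this periodically to implement the background health check described above.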

This redundant-bus configuration creates two independent groups of slave devices. As a result, it can also be useful when one master needs to communicate with slaves that operate in different modes (e.g., some of the slaves use Fast Mode and the others are in Standard Mode).

Conclusion

This article introduced the I2C buffer and its applications. It can be used to reduce bus capacitance, control rise times, add additional devices to the bus, interface devices operating at different voltages, or even to implement a redundant bus. Understanding how a buffer works and when to use it provides another tool for the design engineer's toolbox.

Great designs are a mixture of experience, creativity, and a mastery of the fundamentals. I hope that your understanding of I2C is now a little bit better and that the design challenges that come with it seem a little less daunting.

FPGA Design Software: An Overview of Time-to-Integration Features in Xilinx’s Vivado Design Suite

This article will look at some of the most important features of the Xilinx Vivado Design Suite that accelerate the “time to integration” phase of the design process.

Traditional FPGA design mainly focuses on the concept of programmable logic and I/O. However, today’s complex applications require going beyond programmable logic to “programmable systems”. This is one of the most fundamental ideas behind the creation of the Xilinx Vivado Design Suite.
Vivado is an IP- and system-centric design environment which attempts to simplify integration of soft IPs. This is achieved through several features that will be briefly discussed in the rest of the article.
Note that the features covered in this article mainly aim to accelerate the “time to integration” of the design process. In another article, we’ll discuss the features of Vivado intended to accelerate the “time to implementation” of the design.

Extending the Vivado IP Repository

Vivado has an extensible IP catalog that can include Xilinx and third-party IPs. Vivado enables engineers to quickly turn a part of their design or algorithm into a reusable IP added to the Vivado IP catalog.

As illustrated in Figure 1, with the Vivado IP Packager, all the associated files of the design, such as constraints, test benches and documentation, can be added to the created IP.


Figure 1. Vivado IP creation flow. Image courtesy of Xilinx.

One of the important features of the Vivado IP flow is the ability to create an IP at any level of a design, no matter if it's a Register-Transfer Level (RTL) design, a netlist, a placed netlist, or even a placed-and-routed netlist.

Also note that, as shown in Figure 1, the source files for the IP Packager can include MATLAB/Simulink algorithms from Xilinx System Generator or C/C++/SystemC algorithms from Vivado High-Level Synthesis (HLS). In these cases, custom IP generation is further accelerated because a higher-level description is used to develop the target algorithm. HLS will be briefly discussed in the next section.

C-Based IP Generation with Vivado HLS

Development of today’s advanced algorithms is not straightforward, even for the most experienced RTL teams. That’s why tools such as Vivado HLS, which can take C/C++/SystemC algorithms and generate VHDL or Verilog code, can significantly accelerate IP development and, consequently, the design process.

Vivado HLS allows the engineer to explore the design space and find several different implementations for the same source code as shown in Figure 2 below.


Figure 2. Exploring the design space with HLS. Image courtesy of Xilinx.

As you can see, the tool can optimize several design parameters such as area, latency, and throughput.
It's also possible for an engineer to use HLS to find a solution better than hand-coded RTL. Figure 3 below compares the results of the RTL approach with those of HLS for a radar design example.


Figure 3. Comparison of RTL approach with HLS. Image courtesy of Xilinx.

To achieve this, HLS takes into account the properties of the target device, such as the available DSP48 slices, memory, and SRL blocks. It also tries to implement floating-point algorithms efficiently and to automatically extract parallelism at different levels.
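
For a flavor of what this looks like in practice, here is a minimal sketch in the plain C style that Vivado HLS accepts. PIPELINE and UNROLL are standard Vivado HLS directives, but the function and the specific factors are illustrative choices, not taken from Xilinx's documentation:

    /* A dot product written in plain C for HLS. Changing the directive
       below (e.g., to "#pragma HLS UNROLL factor=8") produces a different
       area/latency/throughput point from the same source -- the essence
       of design-space exploration. */
    int dot_product(const int a[64], const int b[64])
    {
        int acc = 0;
        for (int i = 0; i < 64; i++) {
    #pragma HLS PIPELINE II=1
            acc += a[i] * b[i];
        }
        return acc;
    }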

High-level programming languages, such as C, can be extremely helpful in the algorithm verification stage, too. The designer can rapidly model and iterate the design using a C functional specification and then create a target-aware RTL architecture. In one video design example, the C model reduced algorithm verification time by a factor of about 12,000. Some details of this experiment are shown in Figure 4 below.


Figure 4. Simulation time for RTL and C models. Image courtesy of Xilinx.

The Vivado HLS design flow is illustrated in Figure 5.


Figure 5. The Vivado HLS design flow. Image courtesy of Xilinx.

High-Level System Integration

Now that we have solutions to rapidly develop the required IPs, it’s reasonable to think about methods of rapidly connecting these IPs to each other. To make this possible, Vivado has the IP Integrator (IPI) which allows the user to graphically describe the connections between the IPs.
A designer can construct the connections at either the interface level or the port level. Working at the interface level makes it possible to group a large number of individual signals that serve a common function—and to manipulate them easily.

For example, it's possible to use a single connection to connect all the signals of the interface. Moreover, the design rule checking (DRC) capability of the tool can make sure that the connections of the interface are correct. Hence, the design will be correct-by-construction. Such DRCs on complex interfaces can significantly accelerate the design assembly stage.
In addition to the intelligent auto-connection of key IP interfaces, IPI supports parameter propagation between the connected IPs. The concept of IP parameter propagation is shown in Figure 6.


Figure 6. IP parameter propagation. Image courtesy of Xilinx.

Assume that the data bus width of IP1 has a default value of 32 bits. Now assume that the user connects IP1 to IP0, which has a bus width of 64 bits. In this case, the parameter propagation rules of the tool detect that the bus width has changed. The user may either allow the IPI to automatically update the bus widths of the IPs or direct the program to simply display an error alerting the user to potential issues in the design.




This article only briefly reviewed some of the FPGA design features of the Vivado Design Suite. If you’re familiar with similar capabilities in other tools, please share your experiences with us in the comments below.

World’s Smallest Single-Atom Transistor Functions at Room Temperature, Sans Semiconductor

Researchers have created a transistor that uses no semiconducting material and operates at room temperature—at just one silver atom across.

Researchers at the Karlsruhe Institute of Technology (KIT) in Germany announced what they say is a breakthrough in micro-electronics, creating a single-atom transistor that they claim is the smallest in the world.

The transistor, which operates in a gel electrolyte, works at room temperature and consumes extremely little energy, according to the research team led by Dr. Thomas Schimmel, a physicist at the Institute of Applied Physics (APH) at Karlsruhe.

“This quantum electronics element enables switching energies smaller than those of conventional silicon technologies by a factor of 10,000,” Schimmel, an expert in physics and nanotechnology, said in a statement.


The transistor and the gel electrolyte are shown above. Image used courtesy of Professor Thomas Schimmel / KIT

Schimmel, who does research at the APH, the Institute of Nanotechnology, and the Material Research Center for Energy Systems (MZE) of KIT, is widely considered the pioneer of single-atom electronics. Earlier this year, he was named co-director of the Center for Single-Atom Electronics and Photonics, a center established in partnership between KIT and ETH Zurich.

The researchers said the single-atom transistor works using an entirely new technical approach: no semiconductors are used, and the transistor is made entirely of metal, which they say allows very low electrical voltage and low power consumption.

This continues research done by Schimmel and others at KIT, who announced a single-atom transistor based on silver in 2010.

 


A representation of a concept from Schimmel's 2010 research on the single-atom transistor using a silver atom. Image used courtesy of KIT.

 

Atom-Level Transistors at Room Temperature

Gerhard Klimeck, a professor of electrical and computer engineering at Purdue University, said the potential breakthrough here is that the researchers were able to achieve these results without resorting to subfreezing temperatures.

“If it’s true they are doing this at room temperature, it’s a huge achievement,” he said.
In 2012, Klimeck was part of an international team of researchers—including Purdue, the University of New South Wales, the University of Melbourne, and the University of Sydney—that developed what was then considered the world’s smallest transistor, built around a single phosphorus atom. That single-atom transistor had to be kept extremely cold, at liquid-nitrogen temperatures of minus 321 degrees Fahrenheit (minus 196 degrees Celsius).

At the time, Intel’s most advanced chip, called Sandy Bridge, was built on a 32-nanometer manufacturing process and packed 2.3 billion transistors. The single phosphorus atom, by contrast, was only 0.1 nanometers across.

Klimeck, however, did raise questions about exactly how much more computing you could do because of the way single atoms charge.

Scott Dunham, a professor of electrical engineering at the University of Washington, questioned whether the announcement overhyped the researchers' work. He notes that to deliver the claimed 10,000-fold reduction in switching energy, the voltage would have to be reduced by a factor of 100, since switching energy scales with the square of the voltage.
“The current device is far from single-atom,” he said. “The other thing is their switching times are in seconds, which is pretty absurd.”

Rob Enderle, principal analyst with the Enderle Group, said the announcement could eventually lead to dramatic changes in everything from sensors to mainframes, but he cautioned that any tangible updates to these products remain years in the future.


“This is a proof of concept, which typically would mean production would be at least five, but more likely 10 years or more out in products that you could buy,” he wrote in an email.

The Convergence of Automotive Trends: A Conversation with GaN Systems CEO Jim Witham

The automotive landscape is changing rapidly. GaN Systems CEO Jim Witham spoke with AAC about the unique challenges of efficient power in automotive applications and how autonomous vehicles represent a convergence of major trends across the industry.

The term "mobility" has taken on a new significance in recent months and years, extending beyond the concept of moving from point A to point B. From the halls of CES where "mobility as a service" reigned supreme to the multi-billion dollar investments in developing autonomous vehicles over the past several years, mobility has become an important concept to the electronics industry.
While this increased attention on mobility has naturally resulted in a more intense focus on automotive applications, it turns out that many major trends in the industry are converging in this sphere, including data centers, renewables, and new methods of charging electric systems.


Toyota president Akio Toyoda at the unveiling of the e-Palette mobility-as-a-service concept at CES 2018.

GaN Systems CEO Jim Witham believes that a big portion of the evolution of electric vehicles and autonomous vehicles (EVs and AVs) will hinge on higher levels of efficiency. From his perspective, achieving this efficiency will require the use of GaN, or gallium nitride, a semiconductor that's been posed as an alternative to silicon and can allow for smaller, lighter power systems.

In a recent trio of videos, Witham discusses the concept of mobility, EVs, and AVs with Uwe Higgin of BMW i Ventures, a venture capital group for incubating innovative technologies and a program in which GaN Systems participates.

Witham spoke with AAC to expand that conversation and lay out how EVs and AVs are more than just a particularly progressive portion of the automotive industry. EVs and AVs represent a new era in electronics, weaving together several industry trends that have been developing for years—an era in which Witham believes GaN will be crucial.


Jim Witham, CEO of GaN Systems

The Power Electronics Revolution

"We're in the midst of a revolution happening in power electronics," Witham begins. From the rise of the internet and the availability of affordable memory to the surge in mobile computing devices, the technological landscape has been changing quickly (and sometimes drastically) year over year. According to Witham, the two biggest areas that have grown due to these changes have been the development of data centers (particularly due to a rise online activity spurred by an expanding IoT) and the electric vehicle.

For someone like Witham, whose business deals intensively with power systems and their efficiencies, it's not hard to see how these particular trends are tied closely to power. But, he says, the scope of the issue of efficiency goes beyond the challenges of creating an effective power source. "[Efficiency] means you've got to be efficient with your energy and how it's used. But it also means you've got to be efficient with your materials—your copper and your aluminum and your printed circuit boards. You want to make them as small as possible so you minimize the amounts of materials. When you get down to it, it's all about circuits."

GaN Systems has built a reputation off of the idea that gallium nitride transistors are going to be instrumental in increasing efficiencies for power applications. Over the last several years, GaN has been gaining traction and, in some ways, threatening silicon's dominance in the industry. Especially in the last year or two, it seems that major corporations like Texas Instruments, Analog Devices, Dialog Semiconductor, Qualcomm, and others have been investing in this alternative semiconductor and releasing GaN components and modules.

Current applications for GaN are wide-reaching. It's been noted for its results as a semiconductor used for RF applications, including RF power amplifiers. It's also made possible things like 99% efficient inverter power stage designs.

Witham believes that GaN will be key for many upcoming trends and applications. "...with GaN transistors, you can make things that are four times smaller, four times lighter in weight, four times less energy as heat. They can make the overall system cost cheaper because of that. It's really a driving force for providing the vision for those things that are changing in our society."


An example of a GaN transistor. Image courtesy of GaN Systems.

One of those society-changing concepts is how the automotive industry is rapidly evolving, especially in its move towards electrification and autonomous vehicles.

The Changing Landscape of Automotive Applications

In a general sense, vehicles are getting more high-tech. This is perhaps an unsurprising development as even refrigerators and toasters are becoming advanced enough to require cybersecurity measures. Tech companies, then, often view automotive as an important vertical for product development.
Witham says that GaN Systems, for their part, has viewed automotive as a basic building block in their company's focus for years: "We segment the market into four areas. One is consumer, one is data center, one is industrial, and the fourth is automotive or transportation." Transportation as a concept can, of course, include "everything from satellites to scooters, drones, forklifts, e-bikes. You name it. If it moves, people want to have lightweight power electronics in it—so it really cuts across all the venues, not just cars."

The real focus for transportation in recent years, however, is indisputably cars. Companies around the globe have been investing in technologies such as machine learning algorithms, longer-lived battery designs, and LiDAR sensors, all with automotive in mind. "You see it in places like the Consumer Electronics Show in Las Vegas, which used to be about TVs and computers but is now the biggest car show in the world."


A MobilEye display at CES 2018 shows off sensor innovations for automotive applications.

But while it may not seem extraordinary for a semiconductor company to consider automotive a major vertical, it's worth considering that automakers may not be quite as prepared to make technology a major part of their overall strategy. Automakers have largely enjoyed a relatively stable demand for their products. In recent years, however, Witham has noted that the tech industry has been placing increasing pressure on these automakers as technology companies inch further into the automotive space.

On one hand, more advanced technologies in cars have buoyed automakers with more attractive features to build into their products. On the other hand, the life cycle of the typical vehicle is much longer than that of the typical smartphone. With computing systems increasingly appearing in vehicles, it seems inevitable that industry-wide shifts will need to occur to allow automakers to keep up.

When asked if he thought the automakers will need to adjust lifecycles to keep pace with innovations in tech, Witham said, "I think they have to. I think they realize it. It has a lot to do with the non-car companies that are getting into this marketplace: the Googles and the Apples and the people like that who are used to these fast product cycles and fast turnarounds. They've flipped the whole model upside down. So the automotive companies know they have to react or they could be out-innovated by the other guys. If they're on a five-year design cycle and the other guys are on a three-year design cycle...Wow, that's two and a half models out in the same time period. That's not good."

"The automotive companies know they have to react or they could be out-innovated by the other guys."

Of course, it isn't a simple matter of technology easily slipping into the automotive realm. Automotive applications face several unique challenges, including rapidly changing usability expectations and starkly unchanging safety expectations.

More Tech, Same Form Factor: Expectations for Electric and Autonomous Vehicles

While it may not necessarily look it on the outside, a high-end car released today is markedly different from one released 20 years ago. The concept that a vehicle can connect wirelessly to a smartphone is no longer a novelty but rather an expectation. Now there may also be expectations for a backup camera with predictive guidance, lane sensing and correction, capacitive touch-enabled infotainment consoles, and a whole host of other integrations of technologies that are just becoming common in consumer items.

All of these "bells and whistles" require power, making efficiency and power conversion more important than ever.

"It's only going to get more so, right?" Witham quips. "Those [features] are all kind of new things that you have got to charge. You've got to provide power for those things. They all have to go into a car but we don't want our power shape to grow. We have a basic shape. There's an SUV shape and there's a sedan shape and there's an economy car shape. They define what's put on the road. You can't grow that. But all the other stuff has to get smaller in order to be able to put more in there. I don't ever get to the point of "it's small enough" in the automotive world. We always want to strive for an extra cubic inch because that can be used somewhere else."


BMW's ConnectedDrive is an example of inter-device connectivity. Image used courtesy of BMW.

Despite the fact that cars provide such large form factors, space is still prime real estate when it comes to the size of electronics. At present, this sets automotive apart from other applications where "small enough" is still an important concept.

"A big marketplaces for us is solar renewable energy. It's kind of like a panel is a certain size and, once the electronics get small enough, it really doesn't matter anymore because you can stick them under the panel and so added shrinkage doesn't really help. Not so in the automotive industry. Keep striving for zero."

Safety and Reliability

So what sets automotive apart from other applications when it specifically comes to power?
"Probably the biggest difference is the reliability and quality," says Witham. "When we make and design transistors for the automotive market, we make things that can handle higher temperatures, more extreme cycles, higher voltages, higher current. We do this because we've supplied to the automotive industry before and we know that when the components last longer, then the subsystems last longer and the cars last longer themselves. When I first drove a car, cars broke a lot more than they do today. I'm pretty impressed with how mine, my family's, or my friends' cars last and how little they need to be repaired today compared to the old days. It's because, bottom line, the components are built to be really high quality and more reliable. We see that in spades with the automotive industry and with the amounts of testing we do and the amount of scrutiny we get into with our customers."

Reliability is also a major pain point when it comes to both AVs and EVs.
"For EVs, it's just standard automotive. People expect their cars to last for ten years and drive hundreds of thousands of miles and you've got to continue to do that with an EV just like you did with an internal combustion engine," says Witham.

But he doesn't necessarily see this expectation as a negative.

"There's always a drive for smaller, lighter, and more efficient. We're going to push up the efficiency higher and higher. By driving that efficiency up, you get really huge changes for the car because you can take that energy and instead of wasting it as heat, you can drive the car further. Or you can take batteries out of the car because batteries are heavy and expensive... When you don't burn up the energy as heat and you utilize it usefully, then you can also reduce the cooling system. If we can take three quarters of the wasted heat and put that back into useful energy and only have one quarter left, we can make the cooling system one quarter the size and that has these added benefits. So driving up the efficiency higher and higher gives us longer-distance cars, fewer batteries in the car, and smaller cooling systems which all make for a better vehicle."

Tying Trends Together for Sustainable Autonomous Vehicles

In March, GaN Systems showcased several demos at APEC, including applications related to renewables (such as solar), EVs, and data centers. As it turns out, all of these topics are important for autonomous vehicles.

Those same companies that are investigating entering the automotive sphere have been vastly expanding their data centers. The amount of data processing and storage necessary to support millions of autonomous vehicles is staggering. In broad terms, data must be gathered from sensors and systems in a vehicle, processed, run through decision-making programs, and then fed back to the car's systems to execute actions. While some of this processing must necessarily be done in the car itself, sending data back to company data centers is important for exposing machine learning algorithms to the datasets that will allow them to make better driving decisions.

In 2016, Intel claimed at CES that a single autonomous vehicle could require four terabytes of data per day. This makes data center efficiency important to a sustainable future for autonomous vehicles; according to Witham, companies like Google but also automakers like BMW are expanding their data centers.

But efficiency issues also include conversations about renewable resources.
"I had an 'aha' moment when I was visiting a couple of other companies and we were talking about CO2 emissions," says Witham. "EVs don't make any sense at all if you're going to make your electricity using coal and oil. You've got to have wind and solar and hydro in order to make the electricity to power those electric vehicles, to power those data centers, and to crunch the data. If you tie all of those together, it really makes a great story of the future. If you don't and any one of those breaks down, the whole vision breaks down. These are all interrelated concepts and I feel like GaN Systems is doing its little part to make all the pieces happen and make the step forward for mankind in the power industry."



These power issues can have far-reaching effects on everyday life. Witham thinks of mobility as an important issue for social responsibility. A huge portion of our lives is spent in transit, he argues, and the repercussions of making the power systems behind transportation more efficient and available are huge.

"I think how we use our cars can be socially responsible. There's this huge gain to be had for society. We make the transistors that make not only the electric vehicle go but also a lot of the sensors and compute power that go with an autonomous vehicle. We can play a pretty big role in making that happen and making the assets more useful and making the time more useful to the people and the goods that are in front of us."

Jim and Uwe have been releasing a series of articles on the topics of EVs and autonomous vehicles to accompany their video trio. You can see the most recent article here.

You can watch the first part of Jim's conversation with Uwe below:



SiFive Announces Open Source-Focused SoC Development Platform Based on RISC-V and NVDLA

SiFive announces an open-source SoC platform based on RISC-V and NVDLA architectures.
Yesterday, SiFive, a fabless semiconductor company that produces chips based on RISC-V, announced a new open-source SoC (system-on-chip) development platform based on the RISC-V and NVDLA architectures.

RISC-V is an instruction set architecture (ISA), like x86 or the ARM architecture, that has gained traction in part because it is open source.

What Is NVDLA?

NVDLA (NVIDIA Deep Learning Accelerator), for its part, is an accelerator built around a modular architecture. According to NVIDIA's primer on NVDLA, "NVDLA hardware is comprised of the following components": a convolution core, a single data processor, a planar data processor, a channel data processor, and dedicated memory and data reshape engines, each independently configurable and dedicated to different tasks that a system may or may not require.


Two examples of NVDLA systems. Image courtesy of NVIDIA.

NVDLA has been open source for over a year. It's been showing up in various releases, including Arm's Project Trillium and, earlier this month, Marvell's "AI SSD controller proof-of-concept architecture solution"—both projects aim to aid in scaling data management by bringing machine learning into processing.

The IP Problem of Developing Custom SoCs

In an interview with AAC last December, Shafy Eltoukhy, SVP and GM of SiFive's SoC Division, explained why IPs can present such a hurdle for SoC developers: "...you cannot build a chip by itself based on your idea alone. You really need to be able to use third-party IPs with your own IP so that you can differentiate yourself—a large portion of the costs of building an SoC are the third party IPs...By the time you add the costs of all the IPs up, you may end up with a few million dollars just to license IPs from third parties."

SiFive has invested in custom SoC development with its DesignShare program, which aims to help SoC designers select and engage with various IPs without incurring prohibitive costs. By developing and building SoCs with open-source architectures like RISC-V and NVDLA, the company hopes to broaden the program's accessibility and scope.

Just today, SiFive also announced a new addition to the DesignShare program, ASIC Design Services, which will bring CDL (core deep learning) technology to the program.

Coinciding Machine Learning Announcements from NVIDIA

Yesterday, NVIDIA highlighted the new generation of its GPUs, the RTX 2000 series. The NVIDIA "Turing chip" has been anticipated in the consumer market for its ability to produce high-quality graphics for applications like gaming. The series is so named because the RTX 2000 GPUs feature what NVIDIA calls "Turing Tensor Cores" (which may sound familiar to those who have read about Google's TPUs, tensor processing units).

Because tensor-based chips allow for more powerful processing (NVIDIA claims its Turing RTX graphics cards are six times more powerful than its previous generation, the Pascal-based GTX series), they've become a veritable buzzword among those looking to further bring machine learning and AI capabilities out of the lab and into the consumer market.


NVIDIA's accounting of the development of AI, machine learning, and deep learning. Image courtesy of NVIDIA

Paired with the use of NVDLA in emerging machine learning initiatives, this is evidence that NVIDIA has ambitions of becoming core to the AI revolution as it goes from the realm of research to the realm of hardware.

Do you have experience in SoC development or machine learning? Give us your perspective on this week's news in the comments below.


Featured image courtesy of SiFive.

History of the ISA: Processors, the PowerPC, and the AIM Triple-Threat

Continuing our series on the Instruction Set Architecture (ISA), this week we delve into the PowerPC ISA.

PowerPC was the outcome of the AIM alliance founded in 1991: a trio of unlikely partners—Apple and IBM, who were otherwise competitors, and Motorola, which had a strong relationship with Apple. The three companies developed a family of CPUs based on the PowerPC architecture in an attempt to create a neutral hardware platform that could be used with a variety of operating systems and applications.

The alliance lasted until about 2005, but the PowerPC architecture lived on, maintained by the Power.org consortium of about 40 companies and, more recently, the OpenPOWER Foundation. Here is a brief history of PowerPC and the AIM alliance.

IBM 801—Fast Core CPUs

PowerPC’s roots stem from the IBM 801, one of the earliest RISC processors, which came out of a research project led by John Cocke in the mid-70s. The approach to the IBM 801 was to rethink how processors were designed, focusing on improvements in miniaturization and speed. Cocke’s team began by analyzing traces of programs running on the IBM System/370 mainframe, which revealed where the bottlenecks in processing time were.

John Cocke with an IBM 801 prototype. Image courtesy of IBM.

One of the conclusions the team came to was that processors were more complex than they needed to be to accommodate a number of instructions, many of which were rarely used. Part of the IBM 801's optimization was reducing its instruction set to under 100 essential commands, whereas other CPUs like Intel’s 8086 had more than 400. The simplification of the instruction set also allowed the IBM 801 to implement microcode for a broader variety of machines, while also reducing the size of the CPU.

The resulting design would operate at 15 MIPS, ahead of its time for performance and design. IBM would continue on to work on the PowerPC platform based on what the company had learned from the IBM 801.

Apple’s Secret Projects

While IBM was making headway with its new processor design, Apple was becoming concerned with its relationship with Motorola, which had been providing the processors for its Macintosh lineup. The 680x0 processors were no longer keeping up in performance, especially with emerging (and competing) RISC designs.

Porting the Mac OS to a new CPU was not trivial, as large parts of it were written in assembly for the Motorola 680x0 processors to make the OS faster and less memory-hungry. Mac OS also relied heavily on ROMs for the GUI. However, this did not prevent Apple from exploring its options.

First, the company began the Aquarius project, developing a four-core experimental RISC processor. Expert designers, nearly 50 engineers, and a supercomputer were allocated to the effort—but even with this power behind it, very little progress was made.

In the early 90s, Apple then undertook another project dubbed “Star Trek”, porting the Mac OS to the Intel 80486, a CISC-based processor. Apple was now competing against Windows-based PC computers that were cheaper.

By 1992, Apple gave a demo of Mac OS running natively on an Intel-based PC, showing it could be done.

However, a change of leadership would also change the direction of Apple’s efforts. IBM offered to help Apple complete one of its projects, a new OS called Pink, in exchange for Apple using the PowerPC processor in its computers. IBM also made the PowerPC 601’s CPU bus compatible with that of the Motorola 88110 RISC processor. This made porting the Mac OS to the PowerPC easier, since less of the operating system’s code had to be rewritten, while bringing in all the benefits of RISC processing.

The Power Mac G5 featured a PowerPC G5 CPU. Image courtesy of Apple.

Motorola was brought on board to manufacture the PowerPC CPUs for Apple—a highly favorable agreement for a company like Motorola that wanted to maintain its relationship with Apple and continue making progress with RISC CPUs.

Thus, the AIM Alliance formally came into existence. Apple’s computers would use PowerPC CPUs for nearly a decade before eventually switching to Intel processors in 2005.

The Fate of PowerPC

While all three AIM alliance companies produced systems based on the PowerPC CPU, only Apple’s products experienced success. The group had imagined a common hardware base on which multiple operating systems could run, with high performance and a competitive price point. Even though the PowerPC CPUs benchmarked quite high, the ultimate downfall in desktop computing was that many applications were never made available on the PowerPC platform. Eventually, the alliance broke apart when Apple chose to partner with Intel, and Motorola spun off its manufacturing to Freescale Semiconductor.

The PowerPC platform is still in existence today, primarily in embedded systems. PowerPC cores appear in Freescale processors and Xilinx FPGAs, as well as in video game consoles including the GameCube, Wii, Xbox 360, and PlayStation 3. The RAD750, a radiation-hardened PowerPC-based CPU, also flies on spacecraft such as the Mars Curiosity rover and the Juno probe studying Jupiter.




Do you enjoy history articles on AAC? Let us know what you'd like to see in the comments below.

Civilian rifles

Firearms are just tools, developed by humans and for humans over the centuries to accomplish various tasks. These tasks may vary, but in my opinion firearms are as legitimate for civilian purposes as anything else, and according to accident statistics in many countries, firearms are less dangerous than automobiles.

Of the various uses of firearms, I consider self-defense the most important for civilians. Self-defense is an essential human right, and no police force, no matter how well equipped and manned, can protect everyone at all times. Usually, when speaking of self-defense, handguns come to mind first. However, handguns have limited effective range and limited stopping power (especially when used against wild and dangerous animals in the woods or the like), and they are also often hard to master. Long guns, such as shotguns or carbines, especially those of light weight and soft recoil, are much easier to fire accurately at any range beyond "an arm's length". Shotguns and handguns are discussed elsewhere on this site, so this section will mostly concentrate on rifles and carbines suitable for self-defense (against humans and wild animals), home defense, and general practice and recreational shooting (plinking). Dedicated sport/target and hunting weapons will be left aside, at least for a while, because each subject is too broad to be covered in the available space.

Self-defense is an essential human right; a compact carbine is a good defensive tool
image: Oleg Volk
 
Recreational shooting (plinking) is a lot of fun; just use the gun safely!
image: Oleg Volk
 
Hunting can be fun, a challenge, and a great addition to your table
 
Sport shooting is recognized as an Olympic sport
 
And please always remember the four rules of gun safety!

Rule # 1. Treat all guns as if they are loaded. Always check the gun before handling.


Rule # 2. Never let the muzzle of a gun point at anything you do not want to destroy or kill – always point the gun in a safe direction.


Rule # 3. Keep your finger straight and off the trigger unless you have aimed your gun and are ready to shoot.


Rule # 4. Be absolutely sure of your target, and what is behind it.



Good shooting!

Kalashnikov AK-308 assault rifle (Russia)

The Kalashnikov AK-308 assault rifle is a new development from the Kalashnikov Concern. Developed in response to requests from several foreign customers, this rifle fires the powerful 7.62×51 NATO round and is intended for countries where this cartridge is still in widespread use, as well as for special forces that, in certain tactical situations, might prefer the power, range, and penetration of 7.62×51 ammunition over lighter intermediate rounds such as 5.56×45 or 7.62×39. It was first presented during the Army 2018 expo and is still in development.


Kalashnikov AK-308 assault rifle

The Kalashnikov AK-308 assault rifle is based on the new 5.45mm AK-12 assault rifle, with the basic design stretched and strengthened to accept noticeably more powerful ammunition. It features the same Kalashnikov gas-operated, long-stroke-piston, rotating-bolt action. The receiver layout also follows the famous Kalashnikov pattern, with rigid stamped-steel construction and a removable top cover. The latter is attached to the receiver by a captive cross pin at the front and features a spring-loaded element at the rear to ensure a stable and repeatable position for the integrated Picatinny rail (and the sighting equipment attached to it) through extended field use and maintenance. The controls also follow the traditional AK pattern, with a safety / fire selector lever on the right side. The AK-308 can fire single shots or in full automatic, feeding from detachable box magazines made of impact-resistant polymer with capacities of 20 or 30 rounds. The rifle uses the AK-12-style non-removable gas tube with a frontal maintenance plug. An aluminum-alloy forend is attached to the receiver and relieves the barrel of the stresses that might result from various holding or supporting positions.


Kalashnikov AK-308 assault rifle

The barrel and chamber are chrome-lined for durability and ease of maintenance. The barrel is fitted with an effective muzzle device and a bayonet lug; a quick-detachable tactical sound suppressor can be used when required. The pistol grip is made from polymer and has a small storage compartment in its base. The shoulder stock is of a sturdy side-folding, telescoping design; a cleaning kit is stored inside it, accessible via a sliding buttpad. Standard sights include a protected front post mounted on the gas block and an adjustable aperture rear sight with range settings for 100 to 800 meters. Additional sighting devices (red dot, telescope, and/or night sights) can be attached to the Picatinny rails on the top cover and on the forend.
The AK-308 can also host a 40mm underbarrel grenade launcher and other useful tactical accessories.

Video from Kalashnikov.media showing the first prototype of the AK-308 rifle, which was based on the AK-103. The latest prototype, shown above, is based on the more modern AK-12.

Specification / Value

Caliber, cartridge: 7.62x51 NATO / .308 Winchester
Action type: select-fire
Trigger type: single action (sa)
Overall length: 885-945 mm
Length, stock folded: 690 mm
Barrel length: 415 mm
Weight, empty: 4.1 kg
Magazine capacity: 20 or 30 rounds
Cyclic rate of fire: 700 rounds/min


PCB Design: How to Choose a PCB Manufacturer


Choose a PCB Manufacturer

This is by no means an attempt to provide a comprehensive list of fab houses that you might want to use for a low-quantity order; that information is readily available via Google. Instead, I want to offer some suggestions based on my knowledge and experience. I’m not going to comment on the quality of the boards produced by a given manufacturer because I’ve never in my life received a PCB that I would describe as poorly manufactured (I don’t know if this is because I’m lucky or because I have a knack for identifying fab houses that are serious about quality). I think it’s safe to assume that all of the following fab houses will deliver a fully functional PCB.

Oshpark

The primary advantage is the price, which is excellent. Oshpark orders are subject to constraints that, depending on your circumstances, could be completely unacceptable, completely insignificant, or anywhere in between. The lead time is also quite long, but again, if you’re not in a hurry this is a non-issue. I really appreciate their user interface: it’s intuitive and straightforward, and it provides immediate visual feedback combined with informative explanations. For example:



Oshpark accepts KiCad and EAGLE project files.

Advanced Circuits

In contrast to Oshpark, Advanced Circuits is a longtime industry leader that offers the full range of advanced features and purchasing options. Nevertheless, they are not opposed to low-quantity orders, and they have special deals for two- and four-layer boards that conform to certain restrictions. Two things that set Advanced Circuits apart, at least in my mind, are the FreeDFM file check and their free CAD software, called PCB Artist. FreeDFM checks your Gerber files for issues that could delay or prevent proper manufacturing; some things are even automatically corrected.
Here’s an example from the Advanced Circuits website:



PCB Artist is a fully featured CAD tool that is completely free. The “compromise” here is that you have to manufacture the board through Advanced Circuits. If I understand correctly, the software will generate standard Gerber files (which can be sent to any fab house) after you have made the initial order with Advanced Circuits. Overall the restriction seems fairly reasonable.

Sunstone Circuits

This manufacturer is also in the “longtime industry leader” category, but they encourage small orders and offer a low-cost prototype ordering option. Unlike Oshpark, which has a minimum time-to-shipment of five business days (and that applies only to two-layer boards), Sunstone can ship your prototypes the next day, or maybe even the same day, if for some reason you’re in a desperate rush. Like Advanced Circuits, Sunstone offers a completely free CAD tool (called PCB123) that seems to be quite impressive. I’ve never tried it, but the website claims that it has been used by corporations and institutions as prestigious as Intel, Honeywell, and the United States Naval Research Laboratory. If you have any experience with Sunstone or PCB123, please leave a comment and let us know what you think.

Choose an Assembly Method

After so much time and effort invested in designing, checking, and manufacturing a printed circuit board, it can be depressing to realize that you have succeeded in arriving at the most difficult part of the production process. This is not always the case, but nowadays—i.e., the age of small (if not minuscule), densely packed surface-mount components that sometimes do not even have protruding leads—the issue of physically attaching parts to the board can present the greatest challenge to low-quantity PCB fabrication.

You have four options: professional assembly, DIY reflow, hot-air-gun soldering, and hand soldering (i.e., with a soldering iron). I’m not going to discuss hand soldering because in most cases it will be difficult, impractical, or downright impossible (though the soldering iron can certainly be useful for minor rework tasks). Let’s take a look at the other three options.

Professional Assembly

The primary obstacle here is cost. Automated assembly technology is mature and highly reliable, but the process is not economically adapted to low-quantity orders. It also requires additional manufacturing data:
  • BOM information: The assembly house needs information that enables them to either order the parts or organize the parts that you provide.
  • Solder paste data: You need to submit Gerber files that identify the areas of the board (e.g., pads for IC pins) that must receive solder during the process of solder-paste deposition. (Solder paste is the type of solder used in reflow assembly.) This can be a bit complicated, because in some cases you need to create solder-mask divisions that divide one large pad into multiple smaller rectangles of solder paste (see this article for more information).
  • Placement data: This includes spatial coordinates and rotation for every part on the board; a sample is shown below. The machine cannot place components if it doesn’t know which ones go where and how they are oriented.
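
For reference, placement data is usually delivered as a simple "centroid" file. The columns below are a hypothetical example of the format—every assembly house documents its own required fields and units:

    RefDes   Footprint   X (mil)   Y (mil)   Rotation   Side
    C1       0603        1250      875       90         Top
    U1       QFN-32      2100      1430      0          Top
    R7       0402        1250      615       270        Bottom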
In some cases you might decide that assembling the board yourself is less trouble than generating and double-checking all this extra information.
I know of only one company (MacroFab) that can perform low-quantity automated assembly at a reasonable cost. If you’ve found any other way to obtain similar services at a similar price, please let us know in the comments.

DIY Reflow

This approach is surprisingly feasible. The general idea is that you deposit the solder paste, place the components, bake the PCB in a repurposed toaster oven, and voilà. AAC already has quite a bit of information on this topic.

Manually depositing solder paste onto tiny, closely spaced pads is not easy; you may want to consider using a stencil, which is a flat object with openings corresponding to the solder-paste locations. Stencil-based DIY reflow is discussed in this article. The following image shows the excellent solder-paste deposition achieved by an AAC contributor with the help of a low-cost polyimide stencil.



Hot-Air-Gun Soldering

The only difference between hot-air-gun assembly and DIY reflow is the heat source, which is a good reason to become proficient in depositing solder paste and accurately placing components—once you’ve completed those steps, you can choose between reflow and hot air according to other factors. Hot air is convenient and highly effective for smaller boards. Reflow requires a more elaborate setup, but it heats the entire board evenly and gives you more control over the temperature profile. A reflow profile is a graphical representation of how the temperature should increase and decrease during the reflow procedure; for example:
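
To make the idea concrete, here is a sketch in C of a generic lead-free profile expressed as data. The temperatures and durations are typical textbook values for SAC-type alloys, not a recommendation for any particular paste—always follow your solder paste's datasheet:

    /* A generic lead-free reflow profile expressed as (target temperature,
       duration) stages. Values are typical textbook numbers; consult your
       solder paste's datasheet for the real profile. */
    struct reflow_stage {
        const char *name;
        int target_c;    /* temperature at end of stage, deg C   */
        int duration_s;  /* approximate stage length, in seconds */
    };

    static const struct reflow_stage profile[] = {
        { "preheat", 150,  90 },  /* ramp up from room temperature     */
        { "soak",    180,  90 },  /* activate flux, equalize the board */
        { "reflow",  245,  45 },  /* peak; above ~217 C liquidus       */
        { "cooling",  50, 120 },  /* controlled ramp back down         */
    };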



One question that comes to mind is the proper hot-air temperature. I haven’t found authoritative information on choosing temperatures for hot-air soldering, so you might have to do some experimentation. This is the most specific information that I have at the moment: Recently I assembled a board using low-temperature bismuth-based solder that has a melting point of 138°C. I used a hot-air temperature of 180°C to warm up the board, then I increased to 270°C for the actual soldering phase.

You can read more about hot-air-gun soldering here, and this article (scroll down to the “Manufacturing Considerations” section) briefly discusses bismuth-based solder (which I highly recommend).




What assembly methods have worked best for you? Share your experiences in the comments below.

Guide to PCB Design: From PCB Schematic to Board Layout

This article gives a high-level view of the basics of preparing a schematic for custom PCB fabrication.

There is no doubt that schematic creation and PCB layout are fundamental aspects of electrical engineering, and it makes sense that resources such as technical articles, app notes, and textbooks tend to focus on these portions of the design process. We shouldn’t forget, though, that schematics and layouts aren’t very useful if you don’t know how to turn your finished design files into an assembled circuit board. Even if you’re somewhat familiar with ordering and assembling PCBs, you might not be aware of some options that could help you to achieve adequate results at lower cost.
This article is intended for anyone who is interested in (or might someday be interested in) manufacturing and assembling small quantities of high-quality PCBs. By “manufacturing” I mean “paying a company to manufacture”—I will not discuss DIY fabrication of PCBs, and I can't honestly recommend that approach. Professional PCB fabrication is so affordable and convenient these days, and in general the results are far superior.

I’ve been doing independent and low-quantity PCB design for a long time, and I’ve gradually acquired enough relevant information to put together a reasonably comprehensive article on the subject. Nonetheless, I am only one person and I most certainly cannot know everything, so please do not hesitate to expand upon my work via the comments section at the end of the article. I sincerely appreciate your contributions.

The Basic Schematic

A schematic consists primarily of components and wires connected in such a way as to produce the desired electrical behavior. The wires will become traces or copper pours.

The components include a footprint (AKA land pattern), i.e., a collection of through-holes and/or surface-mount pads that match the terminal geometry of the physical part. A footprint can also have lines, shapes, and text that are collectively referred to as the silkscreen. These show up on the PCB as purely visual elements; they’re not conductive and do not affect the functionality of the circuit.
The following image provides an example of a schematic component and the corresponding PCB footprint (the blue lines indicate the footprint pad to which each component pin is connected).



Converting a Schematic into a PCB Layout

A completed schematic is converted by CAD software into a PCB layout consisting of component footprints and ratlines; this rather unpleasant word refers to electrical connections that have not yet been converted into physical connections.

The designer arranges components and then uses the ratlines as a guide for creating traces, copper pours, and vias. A via is a small through-hole that carries an electrical connection to a different PCB layer (or to multiple layers—e.g., a thermal via might connect to the internal ground plane and a ground-connected copper pour on the bottom of the board).

Verification: Identifying Issues in PCB Layouts

The final step before the beginning of the manufacturing stage is referred to as verification. The general idea here is that the CAD tool attempts to find layout mistakes before they negatively affect the board’s functionality or interfere with the manufacturing process.
I’m familiar with three types of verification (though maybe there are more):
  • Connectivity: This ensures that all portions of a net are connected by a conductive structure of some kind.
  • Consistency between schematic and layout: This is fairly self-explanatory. I assume that different CAD tools have different ways of implementing this form of verification.
  • DRC (design rule check): This one is particularly relevant to the topic of PCB fabrication because design rules are limitations that you impose on your own layout in order to ensure that it can be successfully manufactured. Common design rules include minimum trace spacing, minimum trace width, and minimum drill diameter. It’s easy to violate design rules when you’re laying out a board, especially if you’re in a hurry, so by all means take advantage of the CAD tool’s DRC functionality. The following image conveys the design rules that I used for the C-BISCUIT robot control board.


PCB features are listed horizontally and vertically. The value at the intersection of the row and column corresponding to two features indicates the minimum separation (in mils) between those two features. For example, if you look at the row corresponding to “Board” and then go over to the column corresponding to “Pad,” you see that the minimum separation between a pad and the edge of the board is 11 mils.

Guide to Ordering and Assembling Printed Circuit Boards

This article is part of a series. Check out the rest of the series below:​
  • PCB Schematic and Board Layout​
  • How to Generate Manufacturing Files for Custom Printed Circuit Boards

Choosing the Right CAD Software Program: Where to Start

Before we conclude this portion of our guide, I want to briefly discuss schematic/PCB CAD software. If you’re feeling a bit lost amidst the various free and low-cost options, I recommend that you start with one of the following packages:

DipTrace 

DipTrace is first on the list because it’s my favorite. The prices are reasonable, it does everything I need it to, and I find the user interface to be intuitive and visually pleasing.

EAGLE

I have very limited experience with EAGLE, but it has been around a long time and it seems to be quite popular. The “standard” license is $100 per year; that’s more than I want to pay for CAD software.

KiCad 

This program is free and open source. I always avoided it, in part because I was worried about its stability. However, I recently heard from a highly qualified colleague that KiCad has grown into an excellent tool, even for professional designers. It’s definitely worth a look, especially if you’re on a tight budget.

DesignSpark

Completely free and very capable. I used this tool before I switched to DipTrace.



Are there other CAD programs you've used? Tell us about them in the comments below.

The next portion of the custom PCB manufacturing process we'll discuss is how to generate manufacturing files to submit to a fab house.

Guide to PCB Design: How to Generate Manufacturing Files for Custom Printed Circuit Boards


Choose a PCB Manufacturer

This is by no means an attempt to provide a comprehensive list of fab houses that you might want to use for a low-quantity order; that information is readily available via Google. Instead, I want to offer some suggestions based on my knowledge and experience. I’m not going to comment on the quality of the boards produced by a given manufacturer because I’ve never in my life received a PCB that I would describe as poorly manufactured (I don’t know if this is because I’m lucky or because I have a knack for identifying fab houses that are serious about quality). I think it’s safe to assume that all of the following fab houses will deliver a fully functional PCB.

Oshpark

The primary advantage is the price, which is excellent. Oshpark orders are subject to constraints that, depending on your circumstances, could be completely unacceptable, completely insignificant, or anywhere in between. The lead time is also quite long, but again, if you’re not in a hurry this is a non-issue. I really appreciate their user interface: it’s intuitive and straightforward, and it provides immediate visual feedback combined with informative explanations. For example:



Oshpark accepts KiCad and EAGLE project files.

Advanced Circuits

In contrast to Oshpark, Advanced Circuits is a longtime industry leader that offers the full range of advanced features and purchasing options. Nevertheless, they are not opposed to low-quantity orders, and they have special deals for two- and four-layer boards that conform to certain restrictions. Two things that set Advanced Circuits apart, at least in my mind, are the FreeDFM file check and their free CAD software, called PCB Artist. FreeDFM checks your Gerber files for issues that could delay or prevent proper manufacturing; some things are even automatically corrected.
Here’s an example from the Advanced Circuits website:



PCB Artist is a fully featured CAD tool that is completely free. The “compromise” here is that you have to manufacture the board through Advanced Circuits. If I understand correctly, the software will generate standard Gerber files (which can be sent to any fab house) after you have made the initial order with Advanced Circuits. Overall the restriction seems fairly reasonable.

Sunstone Circuits

This manufacturer is also in the “longtime industry leader” category, but they encourage small orders and offer a low-cost prototype ordering option. Unlike Oshpark, which has a minimum time-to-shipment of five business days (and that applies only to two-layer boards), Sunstone can ship your prototypes the next day, or maybe even the same day, if for some reason you’re in a desperate rush. Like Advanced Circuits, Sunstone offers a completely free CAD tool (called PCB123) that seems to be quite impressive. I’ve never tried it, but the website claims that it has been used by corporations and institutions as prestigious as Intel, Honeywell, and the United States Naval Research Laboratory. If you have any experience with Sunstone or PCB123, please leave a comment and let us know what you think.

Choose an Assembly Method

After so much time and effort invested in designing, checking, and manufacturing a printed circuit board, it can be depressing to realize that you have succeeded in arriving at the most difficult part of the production process. This is not always the case, but nowadays—i.e., the age of small (if not minuscule), densely packed surface-mount components that sometimes do not even have protruding leads—the issue of physically attaching parts to the board can present the greatest challenge to low-quantity PCB fabrication.

You have four options: professional assembly, DIY reflow, hot-air-gun soldering, and hand soldering (i.e., with a soldering iron). I’m not going to discuss hand soldering because in most cases it will be difficult, impractical, or downright impossible (though the soldering iron can certainly be useful for minor rework tasks). Let’s take a look at the other three options.

Professional Assembly

The primary obstacle here is cost. Automated assembly technology is mature and highly reliable, but the process is not economically adapted to low-quantity orders. It also requires additional manufacturing data:
  • BOM information: The assembly house needs information that enables them to either order the parts or organize the parts that you provide.
  • Solder paste data: You need to submit Gerber files that identify the areas of the board (e.g., pads for IC pins) that must receive solder during the process of solder-paste deposition. (Solder paste is the type of solder used in reflow assembly.) This can be a bit complicated because in some cases you need to create paste divisions that split one large pad into multiple smaller rectangles of solder paste (see this article for more information).
  • Placement data: This includes spatial coordinates and rotation for every part on the board. The machine cannot place components if it doesn’t know which ones go where and how they are oriented (see the sample below).
In some cases you might decide that assembling the board yourself is less trouble than generating and double-checking all this extra information.
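
To make the “placement data” item concrete: it’s usually a simple CSV-style text file (often called a centroid or pick-and-place file) exported by the CAD tool. The columns below are a made-up example, since every assembly house documents its own required format and units:

RefDes, X (mm), Y (mm), Rotation (deg), Side
C1,     12.70,  25.40,   90, Top
U1,     30.48,  25.40,    0, Top
R7,     45.72,  10.16,  180, Bottom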
I know of only one company (MacroFab) that can perform low-quantity automated assembly at a reasonable cost. If you’ve found any other way to obtain similar services at a similar price, please let us know in the comments.

DIY Reflow

This approach is surprisingly feasible. The general idea is that you deposit the solder paste, place the components, bake the PCB in a repurposed toaster oven, and voilà. AAC already has quite a bit of information on this topic.

Manually depositing solder paste onto tiny, closely spaced pads is not easy; you may want to consider using a stencil, which is a flat object with openings corresponding to the solder-paste locations. Stencil-based DIY reflow is discussed in this article. The following image shows the excellent solder-paste deposition achieved by an AAC contributor with the help of a low-cost polyimide stencil.



Hot-Air-Gun Soldering

The only difference between hot-air-gun assembly and DIY reflow is the heat source, which is a good reason to become proficient in depositing solder paste and accurately placing components—once you’ve completed those steps, you can choose between reflow and hot air according to other factors. Hot air is convenient and highly effective for smaller boards. Reflow requires a more elaborate setup, but it heats the entire board evenly and gives you more control over the temperature profile. A reflow profile is a graphical representation of how the temperature should increase and decrease during the reflow procedure; for example:



One question that comes to mind is the proper hot-air temperature. I haven’t found authoritative information on choosing temperatures for hot-air soldering, so you might have to do some experimentation. This is the most specific information that I have at the moment: Recently I assembled a board using low-temperature bismuth-based solder that has a melting point of 138°C. I used a hot-air temperature of 180°C to warm up the board, then I increased to 270°C for the actual soldering phase.

You can read more about hot-air-gun soldering here, and this article (scroll down to the “Manufacturing Considerations” section) briefly discusses bismuth-based solder (which I highly recommend).




What assembly methods have worked best for you? Share your experiences in the comments below.

Design Your Own Controller for a Solder Reflow Oven

Continuing from the previous tutorial, this project will show you how to set up the low-level hardware to measure temperature, read the zero-cross detector, drive the TRIAC, and print to the serial terminal using a USART.

Introduction

See Part 1: Control Your AC Mains with a Microcontroller
Last time, we built the TRIAC driver and zero-cross detection circuitry to interface with 120V AC mains voltages. It's a very capable bit of circuitry, but without a proper controller, the end result wasn't all that interesting, since it could only turn the waveform on or off and not dim it. In this project, we are writing C code on an Atmel ATmega328P microcontroller to accomplish several key tasks:
  1. Read the zero-cross signal with an external interrupt and drive the TRIAC with a special form of pulse-width modulation
  2. Use the Universal Synchronous and Asynchronous serial Receiver and Transmitter (USART) to display debug data
  3. Interface with the MAX31855 thermocouple amplifier over the Serial Peripheral Interface (SPI)
  4. Create a general-purpose millisecond timer to help facilitate timeouts, timestamps, and non-blocking delays

Bare metal C means that we are writing very low-level code -- C is just a single step up from assembly language as far as abstraction goes. This means we'll be manipulating bits in specific registers, specifying interrupt vectors directly in our interrupt service routines (ISRs), and sometimes dealing with raw memory allocation with malloc(). There are some macros in macros.h that make this process a little easier for us (and make the code cleaner to read), but familiarity with some of the actual inner workings of the ATmega328P and the names it uses for different registers and components is very important. The complete datasheet (PDF) for the chip has all that info in it and is worth keeping on hand. Programming from the Ground Up may be a helpful resource as well for getting comfortable with low-level development.
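
The article doesn't list the contents of macros.h, but helpers like CONFIG_AS_OUTPUT() and SET_HIGH() (used throughout the code below) are a common AVR idiom. Here is a minimal sketch of how they might be implemented; the port/pin assignments are my assumptions, not the project's actual values:

#include <avr/io.h>

// Pin definitions bundle a port letter with a bit number (hypothetical values)
#define TRIAC_EN    D, 3
#define ZERO_CROSS  D, 2  // INT0 lives on PD2

// Second-level macros paste the port letter into the register names
#define CONFIG_AS_OUTPUT_(port, pin)  (DDR##port  |=  (1 << (pin)))
#define CONFIG_AS_INPUT_(port, pin)   (DDR##port  &= ~(1 << (pin)))
#define SET_HIGH_(port, pin)          (PORT##port |=  (1 << (pin)))
#define SET_LOW_(port, pin)           (PORT##port &= ~(1 << (pin)))

// First-level macros let call sites simply say CONFIG_AS_OUTPUT(TRIAC_EN)
#define CONFIG_AS_OUTPUT(x)  CONFIG_AS_OUTPUT_(x)
#define CONFIG_AS_INPUT(x)   CONFIG_AS_INPUT_(x)
#define SET_HIGH(x)          SET_HIGH_(x)
#define SET_LOW(x)           SET_LOW_(x)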

I'm using some code from Andy Brown and his ATmega8 oven controller. There is some drop-in code reuse, some tweaked bits, and some totally different implementations. In addition to having a different controller, he wrote his code in C++ and uses a different build system, but I still want to give him full credit for the previous work he's done.

Supplies Needed

This project is mostly software, so the parts count is relatively small. You'll need:
  • 3.3V ATmega328P microcontroller board with crystal oscillator (necessary for proper USART functionality)
  • In-circuit Serial Programmer (ICSP)
    • AVR Dragon - I use this one. Lots of features and relatively cheap
    • Arduino Uno - Other main Arduino boards can be used as a programmer as well.
  • USB-Serial Adapter
    • CH340/CH341
    • FT232RL - Needs to work at 3.3 V! I have this 5 V model, but I cut the trace on the back and added a switch:

  • MAX31855 breakout
    • Home grown
    • Adafruit
  • Functioning TRIAC AC controller
  • Computer running Linux with avrdude, binutils-avr, gcc-avr, avr-libc, and gdb-avr installed. It's possible to do this on Windows or Mac but that is outside the scope of this project.

TRIAC Controller


This section is the bread and butter of the controller. The oven_control.c file consists of several parts: oven_setup(), oven_setDutyCycle(percent), and three ISRs that deal with different timing-critical events.

Oven Controller Initialization Function
void oven_setup(void)
{
    // Set up inputs and outputs
    CONFIG_AS_OUTPUT(TRIAC_EN);
    CONFIG_AS_INPUT(ZERO_CROSS);

    // Initial values for outputs
    SET_LOW(TRIAC_EN);

    // Configure external interrupt registers (eventually move into macros.h)
    EICRA |= (1 << ISC01);  // Falling edge of INT0 generates an IRQ
    EIMSK |= (1 << INT0);   // Enable INT0 external interrupt mask

    // Enable Timer/Counter2 and trigger interrupts on both overflow and when
    // it equals OC2A
    TIMSK2 |= (1 << OCIE2A) | (1 << TOIE2);
}
This function just sets up the GPIO and external-interrupt conditions and enables the Timer/Counter2 interrupts (the timer itself is started later, by the zero-crossing ISR).

Output Intensity Function
void oven_setDutyCycle(uint8_t percent)
{
    uint16_t newCounter;

    // Percentages between 1 and 99 inclusive use the lookup table to translate
    // a linear demand for power to a position on the phase-angle axis
    if (percent > 0 && percent < 100)
        percent = pgm_read_byte(&powerLUT[percent - 1]);

    // Calculate the new counter value
    newCounter = ((TICKS_PER_HALF_CYCLE - MARGIN_TICKS - TRIAC_PULSE_TICKS) * (100 - percent)) / 100;

    // Set the new state with interrupts off because 16-bit writes are not atomic
    cli();
    _counter_t2 = newCounter;
    _percent = percent;
    sei();
}
This function controls the output power of the oven and sets the timer wait value accordingly. The powerLUT[] array maps the linear percentage scale onto a non-linear curve: with a linear scale, the actual change in power output between 1% and 2%, or between 97% and 98%, is significantly less than the change between 50% and 51%. This is due to the sinusoidal nature of the waveform we're dimming, and the remapping lookup table helps to correct for it -- see Update 1: improving the phase angle timing for more info. The PROGMEM attribute places the whole array in flash memory instead of RAM, saving space for the actual program. This will also be useful for constant-string storage later in the series.
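
The article never shows how powerLUT[] is generated. As a rough illustration only (my own sketch, not the project's actual generator), such a table can be computed offline on a PC by solving, for each demanded power percentage, the standard phase-angle power relationship power fraction = (π - α + sin(2α)/2) / π for the firing angle α:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    for (int p = 1; p <= 99; p++)
    {
        // Bisect for the firing angle a where the delivered power
        // fraction (pi - a + sin(2a)/2) / pi equals p/100
        double lo = 0.0, hi = M_PI;
        for (int i = 0; i < 40; i++)
        {
            double a = 0.5 * (lo + hi);
            double frac = (M_PI - a + 0.5 * sin(2.0 * a)) / M_PI;
            if (frac > p / 100.0)
                lo = a;  // too much power: fire later in the half cycle
            else
                hi = a;  // too little power: fire earlier
        }

        // Convert the angle to the "effective percent" that
        // oven_setDutyCycle() expects: 100 fires immediately, 0 never fires
        printf("%ld, ", lround(100.0 * (1.0 - lo / M_PI)));
    }
    putchar('\n');
    return 0;
}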

Zero-Crossing Interrupt
ISR(INT0_vect)
{
    /* 0 is an off switch. Round up or down a percentage that strays into the
     * end-zone where we have a margin wide enough to cater for the minimum
     * pulse width and the delay in the zero crossing firing */
    if (_percent == 0)
    {
        OVEN_OFF();
        return;
    }
    // Either user asked for 100 or calc rounds up to 100
    else if (_percent == 100 || _counter_t2 == 0)
    {
        OVEN_ON();
    }
    // Comparison to a constant is pretty fast
    else if (_counter_t2 > TICKS_PER_HALF_CYCLE - TRIAC_PULSE_TICKS - MARGIN_TICKS)
    {
        // Also a constant comparison so also pretty fast
        if (_counter_t2 > (TICKS_PER_HALF_CYCLE - (TRIAC_PULSE_TICKS - MARGIN_TICKS / 2)))
        {
            // Round half up to completely off
            OVEN_OFF();
            return;
        }
        else
            _counter_t2 = TICKS_PER_HALF_CYCLE - TRIAC_PULSE_TICKS - MARGIN_TICKS;
    }

    // Counter is acceptable, or has been rounded down to be acceptable
    OCR2A = _counter_t2;
    TCNT2 = 0;
    TCCR2B = (1 << CS20) | (1 << CS21) | (1 << CS22);  // Start timer: 8 MHz / 1024 = 128 us/tick
}
This ISR triggers on the falling edge of pin PD2. Depending on the value of the global _percent variable, it will either turn the oven fully on, turn it fully off, or set Timer/Counter2's Output Compare Register A to a value corresponding to the "off time" that follows the zero-cross interrupt. It then clears Timer/Counter2 and starts the timer.
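
The timing constants used above aren't defined in the excerpts shown here. The following sketch shows how they could be derived, assuming 60 Hz mains and the 8 MHz clock with the /1024 prescaler configured in this ISR; the pulse and margin values are placeholders rather than the project's actual numbers:

// Timer2 tick = 1024 / 8 MHz = 128 us; a 60 Hz half cycle = 8.33 ms ~ 65 ticks
#define TICKS_PER_HALF_CYCLE  65
#define TRIAC_PULSE_TICKS      2  // ~256 us gate pulse (hypothetical value)
#define MARGIN_TICKS           2  // guard band near the zero crossing (hypothetical value)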

Timer/Counter2 Comparison Interrupt
ISR(TIMER2_COMPA_vect)
{
    // Turn on oven, hold it active for a minimum latching time before switching it off
    OVEN_ON();

    // The overflow interrupt will fire when the minimum pulse width is reached
    TCNT2 = 256 - TRIAC_PULSE_TICKS;
}
When the output comparison value is met, this interrupt fires: it sets the TRIAC_ACTIVE pin high and preloads the TCNT2 register so that the timer overflows TRIAC_PULSE_TICKS counts later.
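
A quick worked example, using the hypothetical TRIAC_PULSE_TICKS = 2 from the sketch above: TCNT2 is preloaded with 256 - 2 = 254, so two 128 µs ticks later the 8-bit counter wraps from 255 to 0, firing TIMER2_OVF_vect and ending the roughly 256 µs gate pulse.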

Timer/Counter2 Overflow Interrupt
ISR(TIMER2_OVF_vect)
{
// Turn off oven
OVEN_OFF();

// turn off the timer. the zero-crossing handler will restart it
TCCR2B = 0;
}
When the timer overflows, the TRIAC_ACTIVE pin goes low and the timer turns off, waiting for an INT0_vect to repeat the process.

USART

In normal C or C++ programming on a computer, functions like printf() and assert() can print formatted text to the terminal and help with debugging. In order to communicate with our device, we need to implement some way of printing to a terminal. The easiest way of doing that is through serial communication with the ATmega's USART and a USB-serial converter.

USART Initialization Function
void usart_setup(uint32_t ubrr)
{
    // Set baud rate by loading high and low bytes of ubrr into the UBRR0 register
    UBRR0H = (ubrr >> 8);
    UBRR0L = ubrr;

    // Turn on the transmission and reception circuitry
    UCSR0B = (1 << RXCIE0) | (1 << RXEN0) | (1 << TXEN0);

    // Set frame format to 8-N-1 -> eight (8) data bits, no (N) parity bits,
    // one (1) stop bit. The initial value of UCSR0C is 0b00000110, which
    // implements 8N1 by default; setting these bits is for Paranoid Patricks
    // and people that like to be reeeeeally sure that the hardware is doing
    // what you say
    UCSR0C = (1 << UCSZ00) | (1 << UCSZ01);
}
In usart.c, there is the standard usart_setup(uint32_t ubrr) initialization function that enables the hardware and establishes the baud rate (bits/second) and transmission settings (8 data bits, no parity bits, 1 stop bit). This is hard-coded to 9600 baud for now in the usart.h file.
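
The article doesn't show usart.h, but main.c (shown later) passes a BAUD_PRESCALE constant to usart_setup(). For the USART's asynchronous normal mode, the datasheet formula is UBRR = F_CPU / (16 * baud) - 1, so a plausible sketch of the relevant lines (the names other than BAUD_PRESCALE are my assumptions) is:

#define F_CPU          8000000UL  // 8 MHz system clock
#define USART_BAUD     9600UL
#define BAUD_PRESCALE  ((F_CPU / (16UL * USART_BAUD)) - 1UL)  // = 51 for 8 MHz at 9600 baud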

Print Single Byte Function
void usart_txb(const char data)
{
    // Wait for empty transmit buffer
    while (!(UCSR0A & (1 << UDRE0)));

    // Put data into buffer, sends the data
    UDR0 = data;
}
This function accepts a single byte and, when the transmit buffer is empty, loads the byte into the buffer. This is the basis for the other printing functions.

Printing Helper Functions
/*** USART Print String Function ***/
void usart_print(const char *data)
{
    while (*data != '\0')
        usart_txb(*data++);
}

/*** USART Print String Function with New Line and Carriage Return ***/
void usart_println(const char *data)
{
    usart_print(data);
    usart_print("\n\r");  // GNU screen demands \r as well as \n :(
}
Much like Arduino's Serial.print() and Serial.println() functions, these take a string as an argument and call usart_txb() for each character. usart_println() just adds an extra step to print a newline and a carriage return.

Interrupt on Receive
ISR(USART_RX_vect)
{
    unsigned char ReceivedByte;
    ReceivedByte = UDR0;
    UDR0 = ReceivedByte;
}
Right now there is no way to meaningfully interact with the software through the USART -- ISR(USART_RX_vect) was written as a placeholder for future development. When a character is received from the USB-serial converter, an interrupt fires and echoes that same character back to the output so it shows up on the screen.

General Purpose Timer

General delay and time-comparison functions are very helpful in a lot of microcontroller applications. The _delay_ms() function in avr-libc's util/delay.h is helpful for small delays, since it uses a busy loop of nop instructions to do nothing for the specified amount of time. This prevents anything else from happening in the program, however. To measure longer blocks of time while allowing the program to continue, we use one of the free hardware timers and its interrupts. On the ATmega328P, Timer/Counter0 is kind of gimpy and doesn't have as much functionality as Timer/Counter1 and Timer/Counter2, so it's a small triumph to be able to use it for something useful. We still have T/C1, but it would be nice to save it for something more complicated in the future.

Timer Initialization Function
void msTimer_setup(void)
{
    // Leave everything alone in TCCR0A and just set the prescaler to clk/8
    // in TCCR0B
    TCCR0B |= (1 << CS01);

    // Enable interrupt when Timer/Counter0 reaches max value and overflows
    TIMSK0 |= (1 << TOIE0);
}
The first function is, of course, the initialization function. It sets the prescaler to clk/8 (giving a 1 MHz timer clock from the 8 MHz system clock) and enables the overflow interrupt.

Return Current System Time Function
uint32_t msTimer_millis(void)
{
    uint32_t ms;

    // NOTE: an 8-bit MCU cannot atomically read/write a 32-bit value, so we
    // must disable interrupts while retrieving the value to avoid getting a
    // half-written value if an interrupt gets in while we're reading it
    cli();
    ms = _ms_counter;
    sei();

    return ms;
}
The msTimer functions chain together and all eventually call this function in some way. This simply returns the value of the global _ms_counter variable which is updated every millisecond.

General Purpose Millisecond Delay Function
void msTimer_delay(uint32_t waitfor)
{
    uint32_t target;

    target = msTimer_millis() + waitfor;
    while (_ms_counter < target);
}
This is the delay() utility function. It accepts the number of milliseconds you'd like to wait and blocks with a while() loop until finished. Note that, unlike msTimer_deltaT() below, the simple comparison doesn't account for counter wraparound, so this should only be used for short delays.

Time Difference Measurement Function
uint32_t msTimer_deltaT(uint32_t start)
{
    // Return difference between a starting time and now, taking into account
    // wraparound
    uint32_t now = msTimer_millis();

    if (now > start)
        return now - start;
    else
        return now + (0xffffffff - start + 1);
}
Measures the time delta between a start time and the current time; it can be used for delay loops that don't block. It also accounts for wraparound: since time is saved in a 32-bit uint32_t variable, incrementing past 0xFFFFFFFF rolls back around to zero, and the else branch factors that into the calculation.

Timeout Detection Function
bool msTimer_hasTimedOut(uint32_t start, uint32_t timeout)
{
    // Check if a timeout has been exceeded. This is designed to cope with
    // wraparound
    return msTimer_deltaT(start) > timeout;
}
A true-or-false flag returned when checking whether a certain amount of time has passed. This is used in the temperature sensor so that you can call the read function as often as you like, but it will only update according to its timeout interval.
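
Here is a minimal usage sketch (my own example, not code from the project) showing the non-blocking pattern these functions enable:

uint32_t lastBlink = 0;

for (;;)
{
    if (msTimer_hasTimedOut(lastBlink, 500))
    {
        // ...do some periodic work every 500 ms...
        lastBlink = msTimer_millis();  // restart the interval
    }
    // Other work continues on every pass instead of busy-waiting
}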

Timer/Counter0 Overflow Interrupt
ISR(TIMER0_OVF_vect)
{
    _ms_subCounter++;
    if ((_ms_subCounter & 0x3) == 0) _ms_counter++;
    TCNT0 += 6;
}
The ISR running the show. Preloading TCNT0 with 6 makes the 8-bit timer overflow every 256 - 6 = 250 ticks; at the 1 MHz timer clock that's 250 µs, so every fourth overflow (the _ms_subCounter & 0x3 check) increments the global _ms_counter by exactly one millisecond.

Temperature Sensor


The functions and data structures used to interface with the MAX31855 temperature sensor are a little different from the previous ones. I'm using a pseudo-object-oriented paradigm where there is a structure named max31855, which is defined in max31855.h:
typedef struct max31855
{
    int16_t  extTemp;       // 14-bit TC temp
    int16_t  intTemp;       // 12-bit internal temp
    uint8_t  status;        // Status flags
    uint32_t lastTempTime;  // "Timestamp"
    uint32_t pollInterval;  // Refresh rate of sensor
} max31855;
In main.c, a struct and a pointer to it are created and any time the temperature needs to be read or the values need to be printed to the USART, the struct pointer is passed as an argument to the different functions.

Temperature Sensor "Object" Constructor
max31855 *max31855_setup(void)
{
    // Reserve some space and make sure that it's not null
    max31855 *tempSense = malloc(sizeof(max31855));
    assert(tempSense != NULL);

    // Initialize struct
    tempSense->extTemp = 0;
    tempSense->intTemp = 0;
    tempSense->status = UNKNOWN;
    // Not sure why Andy Brown makes his last temp time start at 0xFFFFD8EF but
    // it works... Maybe it's to test timer0 wraparound / guarantee causality:
    // https://github.com/andysworkshop/awreflow2/blob/master/atmega8l/TemperatureSensor.h
    tempSense->lastTempTime = 0xFFFFFFFF - 10000;
    tempSense->pollInterval = DEFAULT_POLL_INTERVAL;

    // Set GPIO direction
    CONFIG_AS_OUTPUT(MAX31855_CS);
    CONFIG_AS_OUTPUT(MAX31855_MOSI);
    CONFIG_AS_OUTPUT(MAX31855_SCK);
    CONFIG_AS_INPUT(MAX31855_MISO);

    // Enable pullup on ~CS
    PULLUP_ON(MAX31855_CS);

    // Set outputs to default values
    SET_HIGH(MAX31855_CS);
    SET_LOW(MAX31855_MOSI);
    SET_LOW(MAX31855_SCK);

    // Enable SPI, Master, set clock rate fosc/4 (already default but we're
    // Paranoid Patricks over here and also like to make our code clear!)
    SPCR = (1 << SPE) | (1 << MSTR);
    SPCR &= ~((1 << SPR1) | (1 << SPR0));  // Not necessary............

    // Super speed 2x SPI clock powerup!
    SPSR |= (1 << SPI2X);

    return tempSense;
}
This is the "constructor" and initialization function for the max31855 struct. It reserves space in memory using malloc() and makes sure the returned pointer isn't NULL; since a bare-metal AVR has no terminal for assert() to print to, a failed assertion simply aborts the program by forcing it into an endless loop. The function then configures the GPIO and turns on the hardware SPI peripheral.
Read and Update Temperature Sensor Function
bool max31855_readTempDone(max31855 *tempSense)
{
    if (msTimer_hasTimedOut(tempSense->lastTempTime, tempSense->pollInterval))
    {
        uint8_t i;             // Loop index
        uint32_t rawBits = 0;  // Raw SPI bus bits

        // Bring ~CS low
        SET_LOW(MAX31855_CS);

        // Clock 4 bytes from the SPI bus
        for (i = 0; i < 4; i++)
        {
            SPDR = 0;                       // Start "transmitting" (actually just clocking)
            while (!(SPSR & (1 << SPIF)));  // Wait until transfer ends

            rawBits <<= 8;    // Make space for the byte
            rawBits |= SPDR;  // Merge in the new byte
        }

        // Restore ~CS high
        SET_HIGH(MAX31855_CS);

        // Parse out the temp / error code from the raw bits. Are switch
        // statements bad? I dunno. Maybe. Who cares?
        uint8_t d = rawBits & 7;  // Are there any errors?
        if (!d)
        {
            tempSense->status = OK;
            // Only when the temperature is valid will it update temp. To get
            // the Celsius integer, the temp bits are isolated with an & bitmask,
            // shifted right to align the LSB (18 for extTemp, 4 for intTemp),
            // then shifted right again to get Celsius (extTemp = 0.25 C per
            // bit >> 2; intTemp = 0.0625 C per bit >> 4)
            tempSense->extTemp = rawBits >> 20;
            tempSense->intTemp = (rawBits & 0x0000FFF0) >> 8;

            // Extend sign bit if negative value is read. In an oven. HA!
            // (After the shifts, the sign bit lands at 0x0800 for extTemp
            // and 0x0080 for intTemp)
            if (tempSense->extTemp & 0x0800)
                tempSense->extTemp |= 0xF000;
            if (tempSense->intTemp & 0x0080)
                tempSense->intTemp |= 0xFF00;
        }
        else
        {
            // Set temps to something obviously wrong
            tempSense->extTemp = -22222;
            tempSense->intTemp = -11111;

            // Which error code is it?
            switch (d)
            {
                case 1:
                    tempSense->status = OC_FAULT;
                    break;
                case 2:
                    tempSense->status = SCG_FAULT;
                    break;
                case 4:
                    tempSense->status = SCV_FAULT;
                    break;
                default:
                    tempSense->status = UNKNOWN;
                    break;
            }
        }

        // Update the timestamp and let the read loop unblock
        tempSense->lastTempTime = msTimer_millis();
        return true;
    }
    return false;
}
Designed to refresh only at the defined polling interval, this function leans heavily on the msTimer_hasTimedOut() function. If the timeout has been met, it clocks the SPI bus and reads in 32 bits of data. If the reading is valid and no error bits are set, it parses out the temperature (both the internal reference and the external thermocouple) to the nearest integer. If there is an error, the temps are set to something obviously erroneous and the appropriate status flag is set.
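
To make the bit manipulation concrete, here is a worked example with values I chose myself. A thermocouple at +25.00°C and an internal reference at +22.0°C produce, per the MAX31855 data format, bits 31:18 = 25/0.25 = 100 and bits 15:4 = 22/0.0625 = 352, so rawBits = (100UL << 18) | (352UL << 4) = 0x01901600. Decoding as above gives extTemp = 0x01901600 >> 20 = 25 and intTemp = (0x01901600 & 0xFFF0) >> 8 = 22, with neither sign bit set.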

Status Message Helper Function
const char *max31855_statusString(uint8_t status)
{
    switch (status)
    {
        case UNKNOWN:
            return "UNKNOWN";
        case OK:
            return "OK!";
        case SCV_FAULT:
            return "SCV_FAULT";
        case SCG_FAULT:
            return "SCG_FAULT";
        case OC_FAULT:
            return "OC_FAULT";
    }
    return "Err";
}
Based on the status code, return a string to be printed with USART.
Temperature Sensor Printing Function
void max31855_print(max31855 *tempSense)
{
    // Worst case for itoa() on an int16_t is "-32768" plus '\0' (7 bytes)
    char buffer[8] = {0};

    usart_print("Status: ");
    usart_println(max31855_statusString(tempSense->status));

    usart_print("External Temp: ");
    usart_println(itoa(tempSense->extTemp, buffer, 10));

    usart_print("Internal Temp: ");
    usart_println(itoa(tempSense->intTemp, buffer, 10));
}
Convert the binary temperature value to decimal using the itoa() function and print using the USART.

Putting it All Together

The main.c file is just a small test file that initializes all the other parts through the (device)_setup functions, flushes anything in the USART, and then goes into an endless loop. In the loop, it fades the TRIAC drive intensity in and out and constantly tries to read the temperature. Since a poll interval is specified in the max31855_readTempDone() function, it will only update and print status and temperature at that rate.
/*** main.c ***/

#include "globals.h"

int main(void)
{
    // Globally disable interrupts
    cli();

    // Set up oven, timers, USART, SPI
    oven_setup();
    msTimer_setup();
    usart_setup(BAUD_PRESCALE);

    // Something kinda like OOP in C
    max31855 *m = max31855_setup();

    // Flush USART buffer
    usart_flush();

    // Clear interrupt flag by reading the interrupt register
    // Specify that it's 'unused' so the compiler doesn't complain
    uint8_t dummy __attribute__((unused)) = SPSR;
    dummy = SPDR;

    // Turn on global interrupt flag
    sei();

    // "Hello World" startup message
    usart_println("Hot Toaster Action");

    // Main program loop
    for (;;)
    {
        // "Fade" duty cycle in and out with a single for loop
        int i = 0;
        int dir = 1;
        for (i = 0; i > -1; i = i + dir)
        {
            // Control power output
            oven_setDutyCycle(i);

            // Switch direction at peak and pause for 10 ms
            if (i == 100) dir = -1;
            msTimer_delay(10);

            // If it's done reading, print the temp and status
            if (max31855_readTempDone(m)) max31855_print(m);
        }
    }

    return 1;
}
To finally compile and upload the code to the board, we use GNU Make. Make allows you to specify compiler and programmer options with a somewhat cryptic syntax. I borrowed the makefile template from Pat Deegan at electrons.psychogenic.com and modified it to suit my needs. You may need to do the same if your setup differs from mine at all. The main things you should be concerned with are:
# Name of target controller
# ...
MCU=atmega328p
# ID to use with programmer
# ...
PROGRAMMER_MCU=atmega328p
# Name of our project
# ...
PROJECTNAME=iot-reflow-oven
# programmer id
# ...
AVRDUDE_PROGRAMMERID=dragon_isp
# port
# ...
AVRDUDE_PORT=usb
Once everything is to your liking, type make to compile and sudo make writeflash to upload to your board. If everything went according to plan, it should look something like this:


Conclusion

The next step is to get an actual toaster in the mix and start developing feedback controls for it. We're going to get into some control theory in the next article and write some test scripts to characterize the behavior of our system. That way we can create a robust, fast, and reliable controller in the face of small perturbations and varying oven types. Keep hacking away!

 

Give this project a try for yourself! Get the BOM.

Meet WARLORD: Metawave Aims to Bring Millimeter-Wave RADAR Sensors to the Automotive Industry

We've heard all about LiDAR sensors for automotive applications. But what about RADAR? Metawave has developed a RADAR sensor, dubbed WARLORD, that CEO Dr. Maha Achour believes will eventually allow safer Level 4 and Level 5 autonomous vehicles.

LiDAR has been a quickly rising star in the sensing arena. RADAR sensors, however, may stand to give it a run for its money.

AAC's Mark Hughes spoke to Metawave's founder and CEO, Dr. Maha Achour, and Metawave's VP of Strategic Alliances, Tim Curley, to take a look at how millimeter-wave RADAR sensors may unseat LiDAR as the future of automotive sensing.

The Current State of LiDAR

The first autonomous cars were built with mechanical LiDAR units affixed atop their roofs. These systems generate large datasets called point clouds that are then passed to a computer for processing (3D SLAM). Inside the computer, advanced algorithms try to determine which objects are cars, people, trees, buildings, signs, etc. By watching the objects move over time, the central computer can determine velocity and bearing and predict collisions.

Image showing mapping of the environment surrounding an autonomous vehicle. Image used courtesy of Velodyne LiDAR

While these first LiDAR units were adequate for the early days of Level 1 and Level 2 autonomous driving, they were aesthetically obtrusive atop their parent vehicles, and they committed the unforgivable sin of being prohibitively expensive. For LiDAR to enter the mass marketplace, the per-unit price had to drop substantially, and the easiest way to realize that dream is to remove the rotating array and eliminate any macroscopic moving parts.

For the last several years, multiple companies (Velodyne, Innoviz, Leddartech, etc…) have been working on solid-state LiDAR, especially for automotive applications like the development of autonomous vehicles. The goal of these companies is to provide a 3D point-cloud with no moving parts, or only MEMS-based movement. The technologies of various companies differ, but all of the units are small, have a limited scan area, and have an almost insignificant expense compared to their mechanical predecessors. The low price (several companies claim $100 price points in mass production) allows multiple units to be attached at the corners of a vehicle to provide 360° coverage.
The LiDAR units are not used in isolation—the data they create is fused with other sensor data. Current Level 3 vehicles also might incorporate visual and IR cameras, ultrasonic sensors, RADAR arrays, and a few other technologies that are all centrally fed to a powerful computer such as the NVIDIA JETSON TX1/2. The computer combines the sensor data to better understand the environment surrounding the car. Since each RADAR/LiDAR sensor can generate up to tens of millions of points per second, cars need gigabit transfer networks and computers capable of processing the data in real time.

Unfortunately, most visual detection methods (e.g., visual and IR camera, LiDAR) are adversely affected by weather conditions, dirt, and highway debris. As autonomous cars continue to progress to Level 4 and Level 5, where no driver interaction is required, automobile makers need technology that isn’t flummoxed by a swarm of bees, a mud puddle, or a rainy day.

Active RADAR Antennas

Millimeter-wave RADAR has several advantages over LiDAR. The first, and perhaps most significant, advantage is that RADAR sensors are not affected by weather and cannot easily be obstructed by highway debris. Where a conventional LiDAR unit can become compromised in heavy rain or partially obstructed by a bug strike or other debris, RADAR can see right through those obstructions. A camera or LiDAR unit sees a grasshopper as an opaque, obstructive object that can completely obscure its field of view, whereas a RADAR unit sees a 1-2 dB decrease in signal strength but is otherwise able to function fully. This means that, for example, a child hidden from view behind the leaves of a tree on a fog-filled day is invisible to cameras and LiDAR but remains visible in the millimeter-wave spectrum from up to a quarter mile away.

Metawave created an electrically steerable RADAR antenna system called WARLORD (W-band Advanced RADAR for Long-Range Object Recognition and Detection).


Exploded view of WARLORD. Image used courtesy of Metawave

Don’t think of the device as an antenna array with dozens of feedpoints connected to dozens of antennas. It is a single antenna fed by a single transceiver port. Proprietary integrated circuits on the antenna shift the phase of the signal and are able to steer a main lobe up to ±60°; alternatively, multiple lobes can be created to simultaneously track multiple targets. Additionally, the proprietary ICs can contribute to lower costs. "We augment our structure with our own IC," says Dr. Achour. "This is where some of the cost initially would be high, but since we're targeting three markets and they're all in the same range between 60 gigahertz to 80 gigahertz, 5G, and at the same time the automotive radar. We know that the volume for 5G is quite high, so that can offset the cost of the IC."
Multiple devices can be mounted at the four corners of the vehicle to provide full 360° coverage, or augmented with other, less expensive sensors (such as ultrasonic) for close-range situations. Level 4 and Level 5 autonomous vehicles require no driver interaction, and the burden of responsibility in an accident shifts to the manufacturer of the car rather than the occupants, so the cost of adding safety features is negligible compared to the cost of a lawsuit.

Most current LiDAR units send the point cloud to the central computer for processing. WARLORD is able to process the data at the antenna and send object detection and classification information to the central computer (the point cloud is still available for customers who wish to process their own data), greatly decreasing the computational burden. The unit will send back information that describes the speed of the object (using the Doppler effect), where the object is relative to the car (distance, bearing, elevation), as well as what the object is. For example, WARLORD will notify the main computer that a truck is 500 meters directly ahead and traveling at 20 miles per hour away from the car, and a child is in the crosswalk 50 meters ahead and about to cross into the car's path. This feat of engineering is accomplished by Metawave's in-house team of AI programmers and testers. Since the RADAR is able to detect objects at such great distances, it provides the central computer ample time to track and respond to potential hazards.

The cost of the device is expected to be less than $500 in mass production, in no small part because it is made with readily available metamaterials on a conventional production line: "There is no exotic material or special processing that needs to be done," Dr. Achour says. "And we've manufactured them using conventional production line, and we expect the yield to comply with all the precision and tolerances of these production lines. So expect the yield to be 100% of these structures."

How Does WARLORD Work?

WARLORD has a custom antenna, created with custom materials, controlled by custom integrated circuits.


Active antenna created from adaptive metamaterials. Image used courtesy of Metawave

The signal from a single feedpoint is controlled with the custom ICs to provide an electrically steerable beam pattern.

Beam-pattern from an active RADAR antenna. Image used courtesy of Analog.

This active antenna configuration allows WARLORD to change its beam pattern at will to create one or many lobes, which lets the system simultaneously track multiple objects or focus in on particular objects of interest. Narrow beams allow for tracking objects with a smaller RADAR cross-section at a greater distance.


Metawave's WARLORD used to track and identify multiple targets. Image used courtesy of Metawave.

Challenges for RADAR in the Autonomous Vehicle Industry

It is impossible to predict the future, as the technology is still in the early stages of development, but mechanical rotary LiDAR units appear to have been rejected by OEMs en masse. LeddarTech, Innoviz, and Velodyne have solid-state LiDAR units that are currently being integrated into Level 3 autonomous vehicles. The cost of these units will continue to decrease and their performance will improve. However, all modern LiDAR and camera units suffer from the same critical issues: they have limited range and can be obstructed by debris.

By that same token, however, Dr. Achour says that there's only one real challenge remaining for millimeter-wave technology when it comes to hardware limitations:

"When you start doing beam-forming instead of just sending the signal everywhere, you put this digital wave on every single antenna or analog phase shifter. Now you're operating this array as a phase array antenna. The problem with this approach occurs if it's not being designed in concert together, if they are designed independently. As soon as [the matching antennas] start steering the beam, it goes way above 10 db. You have reflection coming from the antenna, and that reflection basically kills your PA, kills your IC, creates this thermal noise. So, these are talking about the limitations, not talking about the signal processing and the delay and the power consumption in doing this expensive digital signal processing."

Another challenge for autonomous vehicles as a whole is the concept of responsibility for the "decisions" that a car makes. Dr. Achour pointed out that multiple RADAR sensors may be "overkill for Level 3 cars" such as a Tesla because there is a driver who is responsible for the safety of car operation. "But when you go to Level 4 and 5," she says, "Well now the safety is the responsibility of the company that operates this fleet of cars. The profit is not just per car sold but is basically per mile driven. So it's a very different business model for both the car OEM and the service provider."
Metawave, however, does not claim to use its AI to make decisions at a vehicle level.

"Artificial intelligence covers a very broad functionality inside the car. So, if this is centralized, that means we have only one central processor that takes raw data and processes the whole thing. I think that the trend is going to be in doing what we call a hybrid or hybrid centralized and decentralized AI algorithm, so AI processing. Now you have each sensor provide some sort of labeling of these objects to the sensor fusion, and the sensor fusion does another layer of AI to decide 'Should I stay on this lane? Should I brake? Should I change lanes? What should I do?' We [at Metawave] don't do the sensor fusion and there are a lot of companies that do. In addition, all the car OEMs also want to own that sensor fusion because, in the end, this is the brain of the car and the company that has the smartest and safest brain is going to be the winner. We don't expect all of the players to survive a level four or level five challenge. Very few."

So if Metawave's AI isn't intended to perform sensor fusion and produce "decisions" to direct a vehicle's actions, what does the AI do?


"What we offer is an AI algorithm that sits only in the RADAR and only is responsible for processing the radar data and provide it with some level of confidence about the object. For example, if I see a truck maybe with 90% probability, I can provide that label to the sensor fusion, let's say at 300 meters. If I see a motorcycle at 300 meters because the cross-sections are smaller, I will provide it maybe with a 50% accuracy. Now, the sensor fusion will take this information and will instruct the LiDAR and the camera to look in the direction of the motorcycle instead of looking everywhere and wasting time just to verify is this really a motorcycle or not. By doing that, we provide the sensor fusion enough time to react before the car hits the motorcycle, and at the same the RADAR doesn't become liable of the final decision because we provide the long-range information."

Metawave also says it offers something unique to allow for better decision-making. "We give [OEMs, etc.] the option to have raw data. Today, none of the RADAR companies provide raw data. They only provide the two-point cloud, which is the range and the Doppler, just because it's a Level 2, Level 3 [application]. But if we provide them with the raw data, they can do whatever they want with it (and we provide them with the post-processed data of course on a different business model). Then, they have a very stronger platform to work with to make sure that the operation of the car is seamlessly maintained in any kind of operating condition, in any type of weather condition, and at the highest safety expectation."

What’s Next? Ambitions to Unseat LiDAR

Current ADAS (advanced driver-assistance systems) require cameras, LiDAR, and other sensor systems, all of which will almost certainly be necessary for Level 4 and Level 5 vehicles. But, Dr. Achour says, this may change in 10 to 15 years, once sensor fusion has further evolved. With sufficiently advanced RADAR sensors ("with high-resolution imaging capability that is capable of operating in all weather conditions and all environments and also adding the non-line-of-sight detection and tracking, doing the V2V communication"), you may be able to avoid the need for short- and mid-range sensors at all.

"You add more functionality to the RADAR," she says. "You may not need these short-range and mid-range RADAR sensors. So you are eliminating other sensors."


Metawave is still refining its millimeter-wave RADAR technology, as are other companies that haven’t yet made their presence known in the market. In a few years, when the RADAR-based-technology companies are ready for tier-1 integration, they might very well supplant the solid-state LiDAR that is all the rage today.

Introduction to Sinusoidal Signal Processing with Scilab

This article discusses basic signal-processing tasks that can be performed using a free and open source alternative to MATLAB.

Scilab vs. MATLAB

I’ve done quite a bit of work with MATLAB over the years, and it is undoubtedly a powerful tool that can simplify and accelerate a wide variety of engineering tasks. However, developing software of this quality is by no means inexpensive, and I wouldn’t be surprised to learn that the cost of a standard MATLAB license doesn’t fit within the budgetary constraints of numerous entrepreneurs, consultants, startups, and small engineering firms. It turns out, though, that there is a completely free alternative to MATLAB called Scilab.

In my experience with Scilab, it is very capable and reasonably user-friendly. Another advantage is that the Scilab interface is similar to the MATLAB interface, so if you have experience with MATLAB (maybe from your days as a student or an employee of a large company), Scilab should feel somewhat familiar.

Working with Digitized Sinusoids

In the world of signal processing, sinusoids are everywhere. This is as true in the digital realm as it is in the analog realm, and consequently it is important to thoroughly understand the nature of a digitized sinusoid.

Both analog and digital sine waves have amplitude, frequency, and phase. For this article, we don’t need to concern ourselves with phase, and amplitude doesn’t really change when you move from analog to digital; frequency, on the other hand, requires some attention. In the analog domain, frequency specifies the number of cycles with respect to time. Units of time are the same always and everywhere—e.g., 100 Hz (= 100 cycles per second) means the same thing in every engineering project. In the digital domain, frequency loses its reference to an unchanging unit of time. Instead, we have individual amplitude values that must be interpreted according to the sample rate. This can lead to confusion for two reasons: 1) many different sample rates are used, and 2) sample-rate information is not contained within the series of amplitude values.

Sine Generation in Scilab

Let’s explore this issue through an example. We’re going to use Scilab to create one cycle of a sine wave that has 100 samples per cycle. This is the first command:

n = 0:99;

We just created an array that begins at 0 and ends at 99. You can look in the “Variable Browser” to confirm that n is a one-dimensional array with a length of 100.



The array n is the digital equivalent of t (i.e., time) in the typical sin(ωt) expression that we use in the analog domain. Next, I’ll enter the following two commands:

y = sin(n);
plot(y)

This is the result:



It looks terrible, I know, and it’s clearly wrong—we wanted one cycle composed of 100 samples. Let’s think about why this happened:
  1. Sine is simply a function that operates on an argument.
  2. The sine function does not generate unique values for every argument. Rather, the values repeat when the argument changes by 2π: sin(0) = sin(2π) = sin(4π) = sin(6π)....
  3. Thus, one cycle of sine values corresponds to a 2π range of argument values.
  4. This means that to generate one cycle of sine values, we need to modify the argument array so that it extends from 0 to 2π.
The command y = sin(n) doesn’t produce the desired waveform because the argument given to the sine function is an array that extends from 0 to 99. The solution is to divide n by the desired number of samples per cycle, which in this case is 100, and also multiply it by 2π:

y = sin(2*%pi*n/100);

You can readily confirm that this will work: if n is zero, the entire argument is zero; if n is 100, the argument is (2π × 100/100) = 2π; and all the numbers in between are scaled accordingly. Thus, we have reduced the argument range to 2π, and we are still producing 100 samples. (Note: I realize that in this case n does not extend to 100, but if it did, we would want the value 100 to be the first sample in the second cycle. In other words, the first cycle is covered by the 100 values from 0 to 99, the second cycle would be covered by the 100 values from 100 to 199, and so forth.) Here is the result:

plot(y)


Refining the Plot

There’s still something not quite right about the plot. We know that sin(0) = 0, but in the plot, the waveform has a value of 0 for a horizontal-axis value of 1. In other words, the waveform is shifted one sample to the right. This occurs because we didn’t specify a list of horizontal-axis values that correspond to the vertical-axis values contained in the array y. If we tell Scilab plot(y), it uses default values for the horizontal axis, and apparently these default values start at one.

The command y = sin(2*%pi*n/100) generates y values that correspond to the numbers in the array n. If we want the plot to maintain this relationship between y and n, we can use the following command:

plot(n, y)


Incorporating Frequency and Sample Rate

When we’re working with real sinusoidal signals, we generally think in terms of signal frequency (in the analog domain) and sampling frequency (in the digital domain). Thus, we need to integrate these parameters into the purely mathematical ideas presented thus far.
Fortunately, this is not difficult. We saw above that the argument for the sine function needs to include a factor of (2π/samples per cycle). We can calculate samples per cycle for real-world signals as follows:

samples per cycle = sampling frequency / signal frequency

Let’s say we have a system that digitizes a 6 kHz audio signal and a separate 2 kHz audio signal. The sampling frequency is 44.1 kHz, and the ADC fills a 50-sample buffer. The following sequence of Scilab commands can be used to generate values that resemble the data produced by the actual system.

SignalFrequency_1 = 6e3;
SignalFrequency_2 = 2e3;
SamplingFrequency = 44.1e3;
n = 0:49;
Signal_1 = sin(2*%pi*n / (SamplingFrequency/SignalFrequency_1));
Signal_2 = sin(2*%pi*n / (SamplingFrequency/SignalFrequency_2));
plot(n, Signal_1)
plot(n, Signal_2)


Conclusion


This article provided a brief introduction to the characteristics of digitized sinusoids and the techniques used to create these sinusoids in Scilab. I plan to write additional articles on Scilab-based signal processing and analysis; if there’s a specific topic that you think would be interesting, feel free to mention it in the comments.

How Design Kits Simplify IoT’s Last Mile to the Cloud

A sneak peek at two IoT platforms that allow developers to save time and cost while streamlining connectivity to cloud services.

A new crop of Internet of Things (IoT) development kits is simplifying design work while streamlining the last mile that links embedded systems to the cloud. This article presents two case studies that show how IoT designers can quickly implement their ideas with a combination of modular hardware and software solutions.

PI Development Hardware

First, take the UrsaLeo kit from RS Components (RS), which comes with pre-registered access to the Google Cloud. The IoT kit allows developers to configure their own dashboards and charts, so they can set event-based text or e-mail alerts and run Google analytics.

The apps and APIs in the UL-NXP1S2R2 kit help IoT designers manage sensors, run diagnostics, and share information with enterprise software or third-party tools. RS Components is targeting this kit at the IoT sensing designs employed in automotive diagnostics, healthcare, and general data monitoring applications.


The UrsaLeo sensor kit allows developers to collect and analyze data on a dashboard within minutes. Image courtesy of RS Components.

The IoT platform is based on a Silicon Labs Thunderboard™ 2 sensor module which is ready to connect to the Google Cloud services. The module contains sensors for temperature, humidity, UV, ambient light, barometric pressure, indoor air quality, and gas detection. It also features a digital microphone, a 6-axis inertial sensor, and a Hall sensor.

The UrsaLeo kit also features the EFR32™ Mighty Gecko multi-protocol 2.4 GHz radio from Silicon Labs. It supports Thread, ZigBee®, and Bluetooth® Low Energy (BLE) as well as proprietary short-range wireless protocols. The kit also offers a ceramic antenna, four high-brightness LEDs, and a coin cell or external battery pack.

Portable Software Agent

A portable software agent from Ayla Networks is another use case showing how IoT platforms are simplifying connectivity to cloud services. It allows IoT developers to select any cellular or Wi-Fi module and have it connected to the Ayla IoT cloud without a lengthy certification process.

Generally, for a specific connectivity chip or module, IoT designers have to build software and then have it certified, which inevitably results in time and cost overhead. What Ayla has done here is bypass the need to generate source code to port software to a specific connectivity module.

A view of how a communication module pre-loaded with a portable software agent facilitates connectivity to the cloud. Image courtesy of Ayla Networks.

So IoT developers can pick any connectivity hardware and use Ayla's portable agent software to connect to the cloud service. The portable agent comprises source code, a reference implementation, a porting guide, and a test suite for both cellular and Wi-Fi solutions. Ayla also recommends development partners who can perform the porting work for IoT designers that don't have an in-house firmware team.



The development kits described in this article are a testament to how IoT platforms can play a vital role in quickly adding application-enablement capabilities to connected embedded systems, and how IoT developers can focus on their business priorities instead of getting stuck in IoT's connectivity labyrinth.


What other IoT kits have caught your eye recently? Let us know in the comments below.

Kalashnikov Enters the EV Market, Highlights “Revolutionary” Inverter

What do automatic rifles and electric vehicles have in common? Kalashnikov wants to be famous for both.

The electric vehicle market was in for a surprise earlier this month at Russia’s Army 2018 expo, when Kalashnikov Group, the iconic maker of firearms and other military-grade weapons, introduced a prototype electric supercar called the CV-1 (source in Russian), which company officials touted as a technological breakthrough that would compete directly with Elon Musk’s Tesla.

The CV-1: Kalashnikov's New Electric Vehicle

Kalashnikov said the CV-1, which was modeled on the 1970s Soviet-era Combi, included technology that reportedly allows acceleration from 0-60 mph in six seconds, a range of 217 miles (350 km) per charge, and a 90 kWh battery.
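
As a rough consistency check, the claimed pack and range imply an average consumption figure. Here is a quick sketch in Scilab, assuming the full 90 kWh is usable over the entire 217-mile range:

// Energy consumption implied by the claimed 90 kWh pack and 217-mile range
pack_Wh = 90 * 1000;    // claimed pack capacity in watt-hours
range_mi = 217;         // claimed range in miles
mprintf("Implied consumption: %.0f Wh per mile\n", pack_Wh / range_mi);  // ~415 Wh/mi

At roughly 415 Wh per mile, the claim sits in the plausible range for a heavy sedan, if on the thirsty side compared with production EVs.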


The 70s-styled CV-1. Image used courtesy of Kalashnikov media.

In addition, the company boasted that the car included a ‘revolutionary’ inverter technology that could handle 1.2 megawatts of power despite a compact and lightweight package design.

The surprise announcement created quite a stir throughout the industry. Industry experts, however, say the lack of pricing, detailed specs, and a proposed timeline raises questions about whether the claims stand up to scrutiny and whether the car could provide any serious competition to top-of-the-line automakers.

According to the company, the inverter has dimensions of 50 x 50 x 100 cm and a mass of 50 kg while handling 1.2 MW of power. Electric vehicle experts say the limited information on the vehicle's specs, along with the limited capacity of a company like Kalashnikov to go into mass production anytime soon, raises more questions than concerns about the car raising the bar for the existing market.
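
Taken at face value, those numbers imply a remarkable power density. A back-of-the-envelope check in Scilab, using only the figures quoted above:

// Power density implied by the claimed inverter specs
P_W = 1.2e6;                   // claimed power, watts
m_kg = 50;                     // claimed mass, kilograms
V_L = 50 * 50 * 100 / 1000;    // volume: 50 x 50 x 100 cm = 250 litres
mprintf("Gravimetric: %.0f kW/kg\n", P_W / m_kg / 1000);   // 24 kW/kg
mprintf("Volumetric:  %.1f kW/L\n", P_W / V_L / 1000);     // 4.8 kW/L

Whether those densities are achievable is hard to judge from the sparse details, which is part of why experts were skeptical.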

An Answer to the Tesla?

Matt DeLorenzo, senior managing editor at Kelley Blue Book, said that with no production facilities or distribution in the U.S., Kalashnikov cannot be considered a serious threat to Tesla. “Also, while the odd retro styling may appeal to some, it's not in the same league as other luxury electric vehicles on the market or coming soon like the Model S and X and the Jaguar I Pace."

DeLorenzo says that, in terms of battery capacity, 90 kWh is mainstream and the 217 miles per charge range is good, but not exceptional. The Bolt and Tesla Model 3 can both go farther per charge.

Regarding the acceleration credited to the inverter, the 0-60 mph benchmark ‘pales in comparison’ to Tesla’s Ludicrous mode, he said. In 2016, Tesla announced that its Ludicrous mode could accelerate a vehicle from 0-60 mph in 2.5 seconds. At the time, that made the Model S P100D the third-fastest-accelerating production car in the world. Last year, it rose to second place on that list via Ludicrous+ mode, which brought its 0-60 time down to 2.28 seconds.
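
To put those times side by side, the implied average acceleration works out as follows (an illustrative Scilab sketch, assuming constant acceleration over the run):

// Average acceleration implied by each 0-60 mph time
v_ms = 60 * 0.44704;    // 60 mph in metres per second
for t_s = [6.0 2.5 2.28]
    mprintf("0-60 in %.2f s -> %.2f g average\n", t_s, v_ms / t_s / 9.81);
end

The CV-1’s claimed six seconds corresponds to roughly 0.46 g on average, versus about 1.2 g for the 2.28-second Ludicrous+ run.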

DeLorenzo added that it's not known whether the CV-1’s ability to recharge can compare to Tesla’s supercharging. “The specifications are rather sparse and, given that this is the first sign of development from this actor, I would quite confidently say that this is not a credible near-term competitor in the EV space,” said Bjorn Nykvist, research fellow at the Stockholm Environment Institute.

Nykvist said that 90 kWh packs are likely to become commonplace among automakers soon, so this is not a glimpse too far out into the future. He added that the inverter is not a limiting factor for BEV development. “They can be lighter and more efficient, but capacity is not something that determines the performance of EVs,” he said. He noted that critical BEV performance metrics related to the drivetrain hinge on battery chemistry.




Whether Kalashnikov or other Russian automakers will be able to compete in global markets is up in the air. Also unclear is how ready Russians are to jump on the EV bandwagon. Will the Russian EV market mirror China's electric vehicle boom? Or will it struggle to fully take hold as arguably seen in the US EV market?


What's your take on the emerging EV market? Do you have experience in developing power systems for electric vehicles? Share your experiences in the comments below.

The Dawn of Gallium Oxide? Researchers Announce New Transistor to Boost Electric Vehicle Batteries

Researchers from the University at Buffalo have announced a gallium oxide transistor with EV batteries in their sights.

Researchers from the University at Buffalo recently announced a working microscopic transistor made from the up-and-coming semiconductor gallium oxide.


Needle probes on the terminals of the gallium oxide transistor. Image used courtesy of Ke Zeng via the University at Buffalo

What Is Gallium Oxide?

Gallium oxide (Ga2O3) looms large as a new semiconductor material, primarily because of its high bandgap. A bandgap is a measure of how much energy an orbiting electron must absorb in order to “escape” its atom and move from the constricted valence band to the conduction band, analogous to a space vehicle escaping Earth’s orbit. In the conduction band, the “liberated” electrons are free to conduct electricity. That bandgap is 4.8 electron volts for gallium oxide, while silicon’s bandgap is 1.1 electron volts. Competing wide-bandgap semiconductors such as gallium nitride and silicon carbide come in lower, at 3.3 and 3.4 electron volts respectively.

Devices made from gallium oxide, with its higher bandgap, can handle more power and take up less space than devices fabricated from lower-bandgap semiconductors.
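
One way to get a feel for these energies is to convert each bandgap into the longest photon wavelength the material can absorb, using the familiar approximation λ ≈ 1240 nm·eV / Eg. A short illustrative sketch in Scilab, using only the bandgap values quoted above:

// Absorption-edge wavelength for each bandgap: lambda (nm) ~ 1240 / Eg (eV)
names = ["Si" "GaN" "SiC" "Ga2O3"];
Eg = [1.1 3.3 3.4 4.8];    // bandgaps in electron volts, from the text above
for k = 1:4
    mprintf("%-6s Eg = %.1f eV -> ~%4.0f nm\n", names(k), Eg(k), 1240/Eg(k));
end

Silicon’s 1.1 eV corresponds to roughly 1100 nm in the near infrared, while gallium oxide’s 4.8 eV lands near 260 nm, deep in the ultraviolet, which reflects how much more energy it takes to free an electron in gallium oxide.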


Gallium oxide's crystalline structure. Image courtesy of Orci [CC BY-SA 3.0]

They can also tolerate higher temperatures, a major advantage in the rough-and-tumble world of automotive engineering. This could help solve what Gregg Jessen, principal electronics engineer at the Air Force Research Laboratory, described in an article published by the American Institute of Physics as one of the greatest problems in controlling power with semiconductors: the waste of power within a device and the troublesome heat it generates.
Another real issue with silicon-based devices is that they are quickly approaching their practical scaling limits. Not so with gallium oxide, because of its exceptional electric field strength. In an article published in Applied Physics Letters, Jessen and Masataka Higashiwaki make the case that gallium oxide could allow for FETs “with smaller geometries and aggressive doping profiles that would destroy any other FET material.”

A Gallium Oxide MOSFET

Uttam Singisetti, an associate professor in the Department of Electrical Engineering at the University at Buffalo, along with fellow researchers, has taken advantage of gallium oxide’s properties to develop a MOSFET with a breakdown voltage of 1,850 volts, more than double the previous best for this technology.

This is significant because the higher bandgap means that such a device can handle more power at the same size and weight than previous devices could.


Screenshot from the University at Buffalo

The device they built is 5 micrometers wide and, according to Singisetti, this relatively large size makes it unsuitable for mobile devices. Rather, it is better suited for higher-power applications such as power plants and motorized vehicles of all sorts.

As Singisetti states, “We’ve been boosting the power-handling capabilities of transistors by adding more silicon. Unfortunately, that adds more weight, which decreases the efficiency of these devices.” Further, “Gallium oxide may allow us to reach, and eventually exceed, silicon-based devices while using fewer materials. That could lead to lighter and more fuel-efficient electric vehicles.”

Applications for Gallium Oxide Semiconductors

It’s no secret that silicon is the go-to material today for semiconductor devices. But semiconductors are increasingly being called on to fill new roles, and certainly one of the greatest challenges facing engineers today is building components that can handle ever more power without making ever greater demands on space and weight. This is especially true in the automotive field because, while electrically powered vehicles are still thin on the ground, modern cars and trucks are increasingly being controlled electronically.

Semiconductors, electrical cables, and motors are replacing pumps, fan belts, and hydraulics. Yet these clean, efficient electrical systems, while not requiring as much power as the troublesome, polluting mechanical systems they replace, will nonetheless require controlled power. The promise of gallium oxide devices is more power, less space, and less weight. Keep an eye on research in this space for more semiconductor advancements.
You can see the breakdown of a gallium oxide device at its 1,850-volt threshold in the video below:



Bug Bounties Aren’t Just for Software

Bug bounties have been a popular topic in the software industry.

Nearly all of the major tech companies offer bug bounties; there's Facebook, Google, Yahoo, Samsung, and Mozilla, just to name a few. These bounties often range from a ‘thank you’ to swag to thousands of dollars. Bug bounty programs aren't just limited to software companies; many companies that make hardware have followed suit. Here is a list of five companies that offer bounty programs covering hardware as well as software bugs! Keep in mind that most of these programs only reward bugs related to safety and security.

Tesla


Tesla Model S - Find a security flaw, get a reward! Source: Tesla

Tesla is compared to a startup software company more often than a car company, so it shouldn't be surprising that they offer a bug bounty program. Tesla’s bug bounty program covers “hardware products that you own or are authorized to test against (Vehicle, PowerWall, etc.)” in addition to apps, software, and websites. According to Bugcrowd, Tesla has given awards for 108 vulnerabilities ranging in value from $100 to $10,000. Think you have what it takes to crack a Tesla? Head over to Bugcrowd for more details and the fine print!


AT&T & DirecTV


The DirecTV Genie - Find a security flaw, get a reward! Source: DirecTV

AT&T has offered a bug bounty program for quite some time. With their recent acquisition of DirecTV, AT&T is now offering bug bounties for their new subsidiary as well! AT&T is offering rewards of up to $5,000 for critical security issues. For all of the terms and conditions, head over to AT&T’s bug bounty website.


Samsung

Samsung Smart TV - Find a security flaw, get a reward! Source: Samsung

Smart TVs often pack in many extra features like microphones and, in some cases, even cameras. Samsung offers a bug bounty program for their smart TVs, paying $1,000 or more for critical bugs. For all the legal information and rules, head over to the dedicated website for their smart TV bug bounty program.


Blackphone - Secure Smartphone


The Blackphone! Find a security flaw and get a reward! Source: Silent Circle 

The Blackphone is a high-security smartphone made by Silent Circle. The Blackphone gets its security from a specialized Android ROM. Silent Circle offers a bounty program for both the software and the hardware involved with the Blackphone, with a base reward of $128 that varies with the bug. For all the terms and conditions of the Blackphone bounty program, head over to the Bugcrowd page.


Ubiquiti - Network Equipment


Ubiquiti airMAX Bridge - Find a security flaw, get a reward! Source: Ubiquiti

Ubiquiti is a large manufacturer of network equipment and related devices. Ubiquiti offers a bug bounty program for their web applications, and they also offer a bounty program for their network equipment. In particular, this program pertains to their airMAX, UniFi, EdgeMAX, airVision, and airFiber embedded devices. Ubiquiti will pay from $100 to $25,000 for security bugs. According to hackerone.com, Ubiquiti has given out 138 rewards. For more information regarding this bug bounty program, head over to HackerOne.


Bounty Program Resources


Bounty programs are a big deal, with hundreds of companies offering them! Two great websites that facilitate bounty programs are Bugcrowd and HackerOne. Do you think we missed any great hardware bug bounties? Let us know in the comments below!

Wearables Roundup: Bio-hacking, Ford Motors, and Sensors that Stick to Your Body

See what's new in the world of wearables this week.

Bio-hacking

What started out as a strange and sometimes creepy subculture of cyberpunk enthusiasts is becoming a mainstream industry, and startups that make implantables are popping up. Chips implanted under the skin are being featured at BioNyfiken, "the 1st Biomaker Conference in Sweden," taking place at Epicenter in Stockholm. The event, which takes place on April 9th, looks like a combination of a wearables convention and a body-modification convention.


BioNyfiken's logo

It may be a long time before under-the-skin wearables are seen on a large scale, but that's one of the reasons bio-hacking has become so popular in the first place. Amal Graafstra, a prominent bio-hacker in the community, expanded on this in an interview with Digimonica:
"If big companies can’t sell a billion of them in the first year, they’re not really interested. So these solutions wouldn’t come any other way, or at least they’d be 20 years out."
In the meantime, the bio-hacking community is happy to develop these devices themselves instead of waiting for something like an "Apple implant" to come out. The most common bio-implants are RFID chips in the hand, similar to the ones implanted in animals at shelters. They can be used as ID cards for security, location tracking, or as digital wallets. The smaller chips are about the size of a grain of rice and can be injected under the skin. Larger mods probably won't be going mainstream anytime soon.

Ford's Wearables Lab

Ford Motor Company is researching the integration of wearables into their automotive electronics to improve vehicle safety. They had originally experimented with health sensors in the seats of their vehicles, but there were too many variables to give accurate readings. Seats would have to be in the perfect position to get readings, making retrieving the data near-impossible if somebody adjusted the seat. Certain types of pants would also interfere with the sensors.

Instead of throwing in the towel on this endeavor, Ford decided to integrate health sensors into their vehicles by taking the data collected by wearables and feeding it into the vehicle's computers over Bluetooth. While this is a significant leap forward from the seat sensors, wearable integration into vehicles will have its own set of challenges to overcome. Gary Strumolo of Ford's Research and Advanced Engineering discussed these hurdles in an interview with MedCity News. The two largest hurdles Strumolo brought up were power management and accurately measuring variables like drowsiness:
“Even if you had a camera looking at the driver… what metric do you use to make that determination?”
Ford has their work cut out for them, but if somebody can solve the enigma of reducing the power requirements for constant wireless data transfer from wearables, Ford will be your new best friend!


The BioStamp Research Connect

MC10 released the BioStamp Research Connect earlier this week. The BioStampRC is a multi-purpose, lightweight, flexible medical sensor that sticks to the skin like a bandage. The device is only available to medical professionals, but it will allow patients to collect accurate medical data for hospitals remotely via Bluetooth. The device is waterproof and has a 3-axis accelerometer, a gyroscope, several other unnamed sensors, and a 15 mAh battery that lasts for 36 hours between charges. Although the hardware isn't anything new, the application could lead to major breakthroughs in medical data collection.
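
Those battery specs imply a very frugal power budget. A quick sanity check in Scilab, assuming the quoted 15 mAh capacity is fully usable over the 36-hour runtime:

// Average current draw implied by a 15 mAh battery over 36 hours
capacity_mAh = 15;
runtime_h = 36;
mprintf("Implied average draw: %.2f mA\n", capacity_mAh / runtime_h);  // ~0.42 mA

An average draw in the neighborhood of 0.4 mA leaves little headroom for continuous radio transmission, which hints at aggressive duty-cycling of the Bluetooth link.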


The BioStampRC, courtesy of MC10. Somehow it remains sticky even after being washed.

In an interview with USA Today's Jennifer Jolly, Dr. Alvaro Pascual-Leone was very excited about the doors that the BioStampRC can open:
"The ability to capture, with research level precision, tailored data outside of a clinic...that’s the Holy Grail"

Moving Forward...


This was a good week for wearables indeed; all of this new data collection will make for some good weeks for the IoT down the road! If you've come across any interesting wearables in the news that you would like to see covered, let us know in the comments!  

Biometric Security Measures can be Hacked Easily, Here’s Why

Biometric databases and photographs allow a hacker to fool a fingerprint scanner without access to your hand or even a print left on an object. Other biometric security measures don't hold up either.
I've long had a healthy dose of paranoia about online security, and with constant reports of hacked sites and stolen passwords, it's beginning to seem like biometric security measures would be a great idea. Apple has included TouchID, a fingerprint scanner that I know many of my friends and colleagues use, in every iPhone from the 5S onwards. Microsoft has included a face-scanning unlock feature in Windows 10. Many banks and government departments use face scans or retina scans to secure their data or even physical door locks. However, recent research has shown that biometric security measures might all be a huge liability.


Fingerprint security on laptops used to be the toast of the town; now it's a liability.

Gefahren von Kameras is a German biometrics researcher who has shown that almost every biometric device we think of as secure is actually trivial to break into. I specifically brought attention to fingerprints because he shows several ways to fool fingerprint scanners, and because many people use the iPhone TouchID scanner to secure their smartphones. If you want any real security, however, stick to a password. In this video, Gefahren von Kameras discusses how easy it can be to obtain a fingerprint from a photograph.

Here, he shows his process. And here, an iPhone TouchID sensor is fooled with a dummy print using equipment that most electrical engineers could easily access. This is accomplished, as shown, with a scanner and an actual physical print, but it's easy to see that the same process could be performed using a photograph of a fingerprint as well.

Perhaps the most frightening thing to realize is that security measures which cost thousands of dollars and are used to secure banks and government agencies can be fooled with a simple photograph, in many cases even one taken with a smartphone. The fact of the matter is, if you want to access a colleague's PC, it might be possible with just their profile picture and a color printer. In under a day, you can make a dummy print to access their phone using the process demonstrated in the above video, or even by 3D-printing a model built from a fingerprint extracted from a photo or a collection of photos layered together. While it may have been obvious that grabbing a glass somebody used would allow you to copy their prints, it's rather unsettling that a simple, properly lit photograph is all that's needed.


A rubber fingerprint can be used to fool fingerprint scanners. Courtesy of The Verge

It would be one thing if a DSLR were needed, but my own smartphone has a 13 MP camera, which Gefahren von Kameras specifically mentioned as being more than enough to cheat face and retina scanners.

The real question now is: how can you stay secure? The answer is simple: passwords. This is especially true after the 2014 court ruling that fingerprints aren't protected by the Fifth Amendment, while passwords still are. Your best bet is still using safe services that encrypt your data, along with strong passwords. I'm also personally a big fan of two-step verification, which both Google and Microsoft offer. (Those links will help you activate it.) While it won't protect your smartphone (especially if it's an iPhone), it'll keep a whole lot of your personal data safe by requiring that someone have physical access to your smartphone, and the ability to unlock that phone, to access either account. This is a major step toward better security in my opinion, as it theoretically ensures that the person entering your password is actually you. I strongly recommend it for anybody who, like me, allows Chrome to remember passwords and other personal info. If you're truly paranoid, using a VPN to secure web traffic is never a bad option, and most university campuses already do just that. Other than that, you mostly just have to trust in the security of any service that you give a password to.

If you are like most people and cannot remember limitless passwords, only make up totally new ones for services that seem especially sketchy. That way you won't have to worry if that password is stolen as the person with it can't get into anything else of yours. As long as you stay away from using biometric security measures and are smart about making and using passwords, you should be just fine.

Just remember, always keep those passwords to yourself. It's impossible to control what happens to a photograph of your face or hands once it's posted online, but anything that only you know can't be used against you. You can watch Gefahren von Kameras explain how to break into an iPhone below.


