
Electric Cars Getting Much Needed Power Boost

HV-ModAL plans vast improvements to the drive trains of current electric vehicles.

Electric cars have always been a welcome alternative to fossil fuels, but the practicalities of their implementation have fallen far short of traditional motors. Slow to charge, quick to drain, and notoriously lacking in power, electric cars have not been viewed as much more than inner-city transportation for the yuppie set. The research project "HV-ModAL," backed by ten partners from prominent automotive companies including BMW and Daimler AG, intends to give electric cars the power boost and accompanying recognition they deserve.

An example of Germany's eye for design.

Infineon Technologies, a world leader in semiconductors, has been charged with the lead of HV-ModAL, which will thoroughly research electric drive platforms and address power modules for high-power drives up to 250 kW and high voltages up to 900 V, as well as modular multi-level DC/DC converters and system components for batteries over 600 V. If successful, the new drive trains will give electric cars performance comparable to their gasoline counterparts. Especially compelling is the project's aim to create a flexible system simulation model suitable for vehicle platforms across a wide range of manufacturers.

While the German Federal Ministry of Education and Research is funding half of the 7.5 million euros invested in the project, its outcome will not be classified or proprietary, so many manufacturers will be able to utilize the technology. Perhaps Germany understands what the rest of the world is slow to realize: that it's time for electric cars to become the new standard.


SOURCE: Infineon

Engineer Spotlight: Black Box VR’s Rich Reavis Talks Virtual Reality and Gamified Fitness

In this Engineer Spotlight, we’re talking to Rich Reavis, Black Box VR’s Director of Engineering, about VR hardware, system requirements, and the future of VR applications.

In the last few decades, virtual reality (VR) has evolved out of the realm of science fiction and into the mainstream. Its applications range from full immersion using headsets and haptic gloves to augmented reality (AR) and mixed reality (MR) that combine virtual elements with real-world environments.

While VR/AR/MR are growing in popularity across the board, especially in gaming, they've also found a natural home in the realm of fitness. Developers have begun taking advantage of VR hardware's growing accessibility and combining it with the creativity of VR world-building to produce a whole new industry for health and wellness.

While some companies are incorporating stationary bikes and treadmills into virtual reality experiences, Black Box VR is taking it to the next level. In Director of Engineering Rich Reavis’ own words, BBVR is “gamifying fitness,” creating a unique experience where users don headsets, strap on their hands-free motion controllers, and prepare for a resistance-training session from another dimension.


Each muscle group is assigned an element, so if a player is fighting a Fire Hawk, a water attack with a squat should do the trick. Image courtesy of Black Box VR.

Winning the Best of CES 2018 Best Startup award and landing spots as a finalist for Best Sports Tech, Best Fitness Product, and People’s Choice, Black Box VR’s gamified workout touts immersive gameplay where the user controls units on their team, as well as their own character, to defeat opposing units, break down enemy gates along the way, and eventually destroy the opposing tower. The game also works in elemental attacks, each correlating with a specific family of exercises, that the player can use to optimize their damage—and get a varied workout.

But beneath it all lies intricate hardware that Reavis helped develop during his time at BBVR.

Reavis worked for businesses like AWI and Pharmer Engineering as a product development and business development engineer for several years after graduating from Boise State University with his bachelor’s degree in mechanical engineering. He then joined Black Box VR as an ME/EE in August of 2017 before becoming the company’s director of engineering and hardware systems in December of the same year.


Director of Engineering Rich Reavis has been a member of the Idaho Virtual Reality Council for the past 10 months. Image courtesy of Black Box VR.

All About Circuits had the pleasure of talking with Reavis about BBVR’s latest initiatives and exploring how he was able to develop this technology and integrate it with an industry that often isn’t associated with virtual reality to begin with.

All About Circuits (AAC): Why do you believe VR will thrive in the realm of fitness?
Rich Reavis (RR): We believe adherence is the key to any successful fitness program. As video games have proven to be addictive and sustaining, the goal is to harness those qualities in a physically beneficial outlet. VR has the ability to create high-level immersive environments which people can interact with on a very personal level, whether it is a game-based or experience-based interaction.
With AR/VR, there are no bounds to what someone could experience—whether it be a game-based competition, or experience-based event. Imagine hiking in the Alps or playing basketball against Lebron James.
There will most likely be a mixture of VR and AR to facilitate the most high-fidelity experiences—as the technology improves, so will the offerings from developers in both software and hardware.

With AR/VR, there are no bounds to what someone could experience... Imagine hiking in the Alps or playing basketball against Lebron James.


BBVR's Dynamic Resistance Machine sports a variety of features. All images courtesy of Black Box VR.

AAC: What can you tell me about Black Box's hardware?
RR: I could talk all day about the capabilities of our hardware, but here are the highlights: Our resistance machine is completely customizable to each user, containing the necessary mechatronics and software to adjust itself depending upon the individual's body type and the resistance exercise selected within the game. This means that there is a machine-learning algorithm which crunches a lot of data to ensure the handles and stabilization pad are positioned correctly for your height and reach.
As for some specific details, the cable handles can automatically move from 2 inches above ground level to about 76 inches off the floor, accommodating a wide range of moves seen on traditional cable machines in any gym. There is also a stabilization pad that automatically extends for moves such as chest press or standing row which require a bit more support during the move. Weight can be changed instantaneously within our system, ranging from 10 pounds to as high as 125 pounds per handle, providing adequate resistance for many ability levels.

AAC: What makes your hands-free motion controllers and resistance machines unique in the grand scheme of VR design?
RR: Our hands-free motion controllers are just that—hands-free—and they track your motion within VR. It sounds simple enough, but this is a very important, unique aspect of our system. As far as we are aware, we were the first company to produce such devices, as it is imperative that a user’s hands be free to grab handles and complete a workout.
A major part of our offering is teaching and encouraging proper form. The only way we can determine the user’s biomechanics within the experience is by form-tracking, provided by monitoring points within the headset and the two arm motion controllers. We have plans to develop or integrate controllers for the lower body as we expand our software.
The internal design of our controllers is relatively open-sourced technology, based on the hardware development kit provided by SteamVR. This kit is readily available and provided exclusively by Triad Semiconductor.


BBVR's hands-free motion controllers are the first of their kind. Image courtesy of Black Box VR.

AAC: What challenges or surprises might other engineers look forward to when working with the SteamVR HDK?
RR: It’s always exciting when an open-sourced technology hits the market. So many opportunities exist when talented, creative individuals are given the opportunity to tinker and explore. On the flip side, as with any new technology, there are bound to be bugs and highly limiting roadblocks.
Engineers have a great head start with the SteamVR HDK since the design files and component kits are available, allowing those interested to create their first VR-tracked devices in a relatively short time frame. As new firmware and SDKs are released, it should improve the technology as a whole and allow more people/engineers to build upon what’s already available.
The big developments next on the horizon are SteamVR Tracking 2.0 and, of course, wireless headsets. With 2.0 tracking, the promise is that there will be support for more than two base stations per HMD, effectively increasing the coverage area (meaning fewer sensors required on peripheral devices, less power consumed, and better tracking). Larger coverage areas correlate to larger play areas, allowing users to interact on a much grander scale than they are currently used to. Wireless headsets? Enough said… this will be a game changer for all involved with VR.

AAC: Are your headsets designed by Black Box as well?
RR: No, we utilize headsets available on the open market. This technology is already viable and stable in its current form, so there is no need for us to try to compete in this space, as the HMDs available are functional for our purposes.

AAC: What were some of the main challenges or hurdles the Black Box team faced when designing this hardware?
RR: This could be an extremely lengthy response, interlaced with outbursts of tears and hysterical laughter (i.e., an emotional rollercoaster). As with any new product (especially a highly sophisticated engineering product) there exist unforeseen situations, many of which seem almost trivial in retrospect, but they were catastrophic at the time. My two favorite sayings since joining BBVR and tackling this new frontier are:
  1. “You don’t know what you don’t know,” and
  2. “It’s never a problem, until it is.”
Perhaps not very profound, but all too true. When combining multiple new products and industries, there are bound to be surprises. Just like any other engineering project, there exists the typical design conundrum of balancing performance, safety, cost, reliability, ergonomics, and aesthetics all at the same time. “Tradeoffs” and “compromises” are words we struggle to accept within our company, as we always look to optimize each and every aspect of the product.

When combining multiple new products and industries, there are bound to be surprises.

Just speaking to EEs about our main challenges, there have been several stages with both the machine and motion controllers. For the machine, there was a very stressful period involving signal noise interference between the automation system and VR devices. Many high-frequency devices are interacting with each other within a complex network, which can make it difficult to pinpoint the source(s) of noise. If any of you have dealt with noise issues on any type of project, you can probably relate.
Beyond reevaluating component spec sheets and leaning on manufacturers, the best practice is to start eliminating potential culprits one at a time. This is an important debugging and troubleshooting protocol, though it is not always possible when factoring the project schedule.
A piece of advice: Always allow adequate time for programming and debugging when commissioning a system or product. It will always take longer than you think. Most often, you will not have a lot of time for this piece of the design process. Just make sure to document and archive your findings. Too often, these same or similar issues rear their ugly heads, and you could waste valuable time solving the same problem multiple times. 


BBVR claims that their workouts will "increase muscle, decrease body fat, build strength, increase cardiovascular endurance, and improve overall health and longevity." Image courtesy of Black Box VR.

AAC: Is there anything else related to BBVR's hardware that you think would be especially interesting to our electrical engineering audience?
RR: From an electrical/controls perspective, there are many challenges when developing an advanced embedded system, especially when it is a first-of-its-kind application. For us, the control system within the machine is the critical hardware piece that requires the most attention and optimization as we develop our game. Imagine it as a highly intelligent central nervous system that is implanted into a somewhat traditional mechanical device, producing a fitness robot if you will. Your readers are probably familiar with these types of systems. Now, it just wears a different skin and allows your body to input commands to play a game.
Needless to say, there are several design considerations which require constant monitoring and fine-tuning as aspects of the game and user features change. As a dynamically resistance-based system, there are certain electrical limitations we must work within. This often requires us to think of the future as our product and business evolve, leading to the ultimate question: How do we get this into the homes of consumers?
Sometimes, there are no absolutely right answers, but there are a few absolutely wrong answers. Certain governing electrical properties or constraints are continuously being evaluated and considered as we iterate. The obvious considerations are power supply/consumption, heat generation/dissipation, and force/torque outputs. More subtle design considerations deal with the control programming and communication/command interfaces. Syncing all of this to essentially hide in the background and run flawlessly without anyone knowing it’s there is the ultimate engineering challenge for our highly skilled team.

AAC: As VR becomes more commonplace, how do you see its hardware evolving?
RR: There will most definitely be more peripheral and accessory devices coming to the market soon. We are already seeing innovative concepts like omni-directional treadmills, haptic gloves, and haptic suits. There are many exciting developments as we venture into the integration of AR and wireless capabilities. With more efficient tracking systems, form factors will be optimized for everyday use and will mimic what we see in the movie “Ready Player One” [which features full-body haptic suits and wire rigs].
There are many factors to take into consideration when predicting future VR computing system requirements. One thing I do know is users will demand higher fidelity experiences, which will require higher frame rate speeds. At the same time, tracking reliability and range are in the process of a major upgrade. So, this means there may be more demand on the processors and GPUs, but less on the volume of peripheral sensors, as more coverage ability requires fewer sensors—which translates to lower power consumed and better tracking for those devices.


BBVR aims to have their first full-fledged VR gym in San Francisco, California, up and running in the fall of 2018. Image courtesy of Black Box VR.

AAC: How would you differentiate the design requirements for, say, a traditional gaming computer and the requirements for a VR gaming computer?
RR: GPU performance is crucial, as most high-end VR games and experiences currently run at 90 fps and above. As the requirements for faster frame rates rise, this will only become more prevalent. A typical gaming laptop can perform very well at 60 fps, so the jump in frame rate is definitely energy intensive. We are hearing a lot about the progression of VR-ready PCs on the market, better/bigger GPUs, faster clock speeds on the CPUs, better cooling systems, etc. Manufacturers are aiming to optimize these machines for VR.

AAC: What other innovative applications do you see VR being used for as these technologies progress?
RR: The sky's the limit! With a combination of AR/MR/VR technologies, there are endless possibilities within education, entertainment, fitness/sports, manufacturing, design—you name it.
The more commonplace the technology becomes, the more developers and manufacturers will get in the game, bringing in unique talents and viewpoints. Though VR has been around for quite some time, it seems to be ramping up to its always-promised potential. There are still a few major hurdles to clear, but things are looking positive as more investment and opportunities arise.
As an engineer and designer, I am especially excited for the advancements within parametric CAD software. I have always envisioned having the ability to interact with the designs, being able to touch and feel the model as you evaluate in real-scale… so cool!

Thank you so much for your time, Rich!

Check out Black Box VR if you'd like more information on their gamifying initiatives and updates on upcoming releases.

New Library from Microchip Brings Touch and Gesture Functionalities to Familiar Microcontrollers

Touch displays often require specialized touch controllers, which can bump up the cost of final products. To mitigate these cost increases, Microchip has just released a free touch library for use with most PIC, AVR, and SAM microcontrollers.

For some, it seems like only yesterday that touchscreens started to become integrated into products, taking up the mantle as our primary technology interface from the mouse and keyboard.
The first touch displays that I remember were resistive, found on the LG Optimus, Sony Ericsson phones, and even the Nintendo DS. But it was not long before capacitive displays took the market by storm, and now the majority of displays are capacitive.

There are many reasons for the dominance of capacitive displays over resistive displays, including multitouch support, better screen contrast, and higher sensitivity. Now that displays can be found in many products and most mobile devices use touch screens, customers are beginning to expect such technology as standard. This expectation is pressuring designers to integrate touch displays into their products.

But a few issues come with this touch revolution. Integrating a touch display usually requires a touch controller which can increase the cost of manufacturing. And with the slowly rising popularity of wearable electronics, space on a PCB is becoming seriously precious.
In response to the rising demand for touch-oriented design resources, Microchip has released their 2D Touch Surface library that brings touch display capabilities to simple microcontrollers including PICs, AVRs, and SAMs.


Image courtesy of Microchip.

Microchip’s Touch Library

The library (which only requires ADC2 and 14 KB of free program memory) allows the designer to easily create phone-like UI elements without the need for a complex operating system in the background. This means that parts such as the PIC16F1559 (costing as little as £0.90 per piece) can be used to create user-friendly interfaces while keeping a tiny footprint. A usage sketch follows the feature list below.
The touch library also has other features, including:
  • Single/dual finger tracking
  • Dual finger surface gestures
  • Water-tolerant 2D touch sensing
  • Noise-robust touch sensing
  • Low-power touch (sleep scan mode consumes as little as 5 µA)
  • Simple integration into Microchip projects
  • Library access and GUI-based configuration via MCC/START
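To give a feel for how such a library slots into a bare-metal project, below is a minimal superloop sketch. Every function name in it is a hypothetical placeholder: the real API is generated per project by MCC or START when the library is added, so treat this as the shape of the integration rather than Microchip's actual interface.

#include <stdint.h>

/* Placeholder prototypes standing in for the MCC/START-generated code. */
void     touch_init(void);        /* configures ADC2 and the sensing channels */
void     touch_process(void);     /* runs one acquisition and processing pass */
uint8_t  touch_finger_down(void); /* nonzero while a finger is on the surface */
uint16_t touch_get_x(void);       /* last reported surface coordinates */
uint16_t touch_get_y(void);

int main(void)
{
    touch_init();

    while (1) {
        touch_process();                /* call repeatedly from the main loop */
        if (touch_finger_down()) {
            uint16_t x = touch_get_x();
            uint16_t y = touch_get_y();
            /* drive the UI here: move a cursor, scroll a list, etc. */
            (void)x; (void)y;
        }
    }
}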


Gesture Control: The Next Interface Paradigm

But touch control is not the only capability of the Microchip touch library; with further processing, gestures can be detected as well. This reflects the fact that gesture recognition is poised to become the next form of interaction to work alongside, and possibly overtake, touch interfaces. It was just about two years ago, remember, that Google's Project Soli introduced us to some novel gesture recognition concepts. These concepts, such as the pinch and zoom gestures, are now standard motions, and they are also included in Microchip's product demonstration.

These gestures can be detected from up to 8 inches away from the display. The image below shows the different gestures that can be detected using the library.

Gestures built into the library. Image courtesy of Microchip.

The Microchip website also mentions the ability to map gestures to GPIO pins. While Microchip does not go into further detail about this ability, it could mean one of two things (or both). The first possibility is that when a gesture is detected, the library asserts a GPIO pin whose signal can be picked up by an ISR. This would result in lightning-fast responses to complex gestures, as the gestures are directly linked to the hardware.

The other possibility is that gestures could be used to output signals from GPIOs as soon as they are detected, which could be used to drive another microcontroller or I/O device. Either way, there is no operating system overhead, which results in faster responses from the touch display.
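As a rough illustration of the first scenario, the Arduino-style sketch below reacts to a gesture line with an external interrupt; the Arduino is purely a stand-in for whatever host processor watches the pin, and the pin number and rising-edge behavior are assumptions for illustration.

const int GESTURE_PIN = 2;           // input wired to the touch MCU's gesture output (assumed)
volatile bool gestureSeen = false;   // set in the ISR, consumed in loop()

void onGesture() {
  gestureSeen = true;                // keep the ISR short: just raise a flag
}

void setup() {
  pinMode(GESTURE_PIN, INPUT);
  Serial.begin(9600);
  // Assume the touch controller drives the line high when a gesture is detected
  attachInterrupt(digitalPinToInterrupt(GESTURE_PIN), onGesture, RISING);
}

void loop() {
  if (gestureSeen) {
    gestureSeen = false;
    Serial.println("gesture!");      // react immediately, with no OS in the way
  }
}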




User interfaces are changing more rapidly than ever. The release of Microchip's library could allow many more designers, including students, to keep up with these trends and design better systems. If you decide to check out the library, please let us know your experiences in the comments below!

Side-Wettable Flanks Enable AOI on Leadless SMD (DFN) Packages

Quad flat no-lead and discrete (or dual) flat no-lead packages provide a welcome increase in component density on a PCB by eliminating leads. However, the solder connection quality cannot be tested by automated optical inspection. This article explores solutions to this often costly problem.

Many applications are space-limited, leading to the development of quad flat no-lead (QFN) packages with connection pads only on the underside. This increases component density on a PCB by eliminating leads. Such packages are used in high volume for discrete semiconductors, termed discrete (or dual) flat no-lead (DFN) packages. DFN packages are characterized by their small size and low number of I/Os (Figure 1).

DFN example packages: DFN2020MD-6 (left) and DFN1006D-2 (right)
Figure 1. DFN example packages: DFN2020MD-6 (left) and DFN1006D-2 (right)

A huge variety of DFN packages are now available. The internal construction of DFN-packaged devices saves space and also leads to a reduced thermal path (Figure 2). However, QFN/DFN packages suffer a pretty significant disadvantage: the solder connection quality can only be fully inspected by costly x-ray processes, rather than automated optical inspection (AOI), because the solder connection is only underneath the plastic body of the package. The automotive industry, in particular, benefits most from the use of AOI, leading Nexperia to take a long look at solutions to this challenge.

The advantages of leadless packages.
Figure 2. The advantages of leadless packages.

DFN packages are assembled in a manner similar to leaded packages except that a group of several products is molded with epoxy plastic in one shot. All QFN/DFN package lead frames consist of a copper alloy base material. Many of them are plated with a nickel-palladium-gold (NiPdAu) layer stack which is pre-applied by the lead frame supplier, guaranteeing an oxide-free surface for chip attachment, wire bonding and, on the connection pads, for wetting with solder.

Optionally, the NiPdAu layer may be additionally plated with tin. Cutting into individual devices is done after electro-galvanic tin plating. Of course, this makes tin plating of the bottom pads’ side flanks, which are exposed after sawing, impossible. The material of the side flanks of the DFN package pads is a copper alloy (the lead frame base material), which may oxidize, so wetting with solder in the reflow soldering process depends on storage conditions and duration and therefore cannot be guaranteed.

Side-Wettable Flanks Guarantee Solder Wetting of Side Pads for Low I/O DFN Packages

To overcome this challenge, a solution has been developed which covers the side flanks with plated tin in the same electro-galvanic plating step as used for the bottom pads. This technique is only applicable to DFN packages with up to four pads (more if multiple pads are fused together), and the pads need to be on opposite sides of the package. Plating the pads' side flanks on all four sides of a DFN package is not possible with this method. Figures 3 and 4 show details of a DFN package with side-wettable flanks (SWF).

Detail view of side-wettable flanks - DFN2020MD-6 package
Figure 3. Detail view of side-wettable flanks in a DFN2020MD-6 package.

Cut away view of DFN2020MD-6 package
Figure 4. A cutaway view of the DFN2020MD-6 package.

The fully tin-plated side-wettable flanks guarantee that the complete side pad surface is wetted with solder during the reflow soldering process. An important advantage of this process is that the plating layer on the side flank is as thick as on the bottom pads — around 10 µm. This guarantees a wettable surface even after long periods of storage. Examples of the optical appearance of side flanks after soldering are shown in Figure 5 for a DFN2020-6 package with and without side-wettable flanks and in Figure 6 for the two-pad DFN1608-2 package.

AOI example comparison of a DFN2020-6 package with SWF versus bare copper side flanks after soldering.
Figure 5. AOI example comparison of a DFN2020-6 package with SWF versus bare copper side flanks after soldering.

Appearance of side-wettable flanks (DFN1608) after soldering
Figure 6. The appearance of side-wettable flanks (DFN1608) after soldering.

The height of the side-wettable flanks of a DFN package plated with this method depends on the lead frame thickness, but it meets the minimum height requirement of 100 µm raised by some automotive customers.

AOI Capability and Proof

The main purpose of the side-wettable flanks is to facilitate a reliable AOI capability for DFN packages. Thus, costly x-ray inspection can be skipped.

AOI enabled DFN package with Side-Wettable Flanks
Figure 7. AOI-enabled DFN package with side-wettable flanks.

One important condition to consider is that the PCB solder pad must be extended beyond the package dimension to allow space for the solder to build a meniscus or fillet. The solder footprint recommendations of suppliers that offer packages with side-wettable flanks include this extra space.

To examine the suitability of Nexperia’s side-wettable flanks for AOI inspection, multiple test boards were built with solder footprints modified to accommodate the SWF package. The printed solder paste volume was deliberately varied — on some PCB solder pads, no solder was printed at all (see Figure 8). Working with a leading AOI equipment vendor confirmed that standard AOI techniques can reliably identify soldering failures on DFN packages with SWF after reflow soldering.

Example of solder failures on test board
Figure 8. Example of solder failures on test board.

Additional Benefits of Low Pin Count DFN Packages with Side-Wettable Flanks

An additional benefit of DFN packages with side-wettable flanks is that the mechanical robustness of the bond to the PCB is improved when compared to devices without side-wettable flanks.

Board level robustness improvements of DFN packages with side-wettable flanks.
Figure 9. Board level robustness improvements of DFN packages with side-wettable flanks.

As shown in Figure 9, the shear force required to dislocate the package from the PCB is increased due to the meniscus formed after soldering. Shear force data was collected for a DFN2020-6 package with and without side-wettable flanks; overall, 80 samples of each variant were sheared off the PCB after soldering. The results show that the shear force improved by about 10% with side-wettable flanks, and the standard deviation also improved (see Figure 10).

Shear Test on PCB for a DFN2020-6 package with and without side-wettable flanks.
Figure 10. Shear Test on PCB for a DFN2020-6 package with and without side-wettable flanks. 

Board bending tests also confirmed an increased robustness for DFN devices with side-wettable flanks — a result of additional features at the package solder pads which achieve a better anchoring to the plastic body. Summarizing the data proves that the board-bending depth for the DFN1006-2 package with SWF is up to 14 mm, whereas some passive chip components of the same size have a bending depth often specified at only 1 mm.

SWF Solutions for DFN Packages with More than Six I/Os

For DFN/QFN packages with multiple I/Os (above six) and lead-frame thicknesses of 200 µm and above, one alternative is to use dimples on the side pads. Dimples are pre-etched and NiPdAu-plated together with the bottom pads by the lead-frame supplier. Device separation is done at a point between two adjacent packages in the middle of the etched dimples. The wettable feature size formed by the dimples is smaller than that of the galvanic tin-plating solution previously described. Usually, packages with side-wettable flanks made in this way are delivered with NiPdAu pad plating, i.e., without additional tin-plating on the pads. Figure 11 gives an example of such a package without (left) and with (right) the dimple feature.

Example of multi I/O DFN/QFN package with dimples to achieve wettable flanks
Figure 11. An example of multi I/O DFN/QFN package with dimples to achieve wettable flanks.

Another alternative is partial separation of the DFN packages after molding but prior to tin plating, also known as the “saw plate saw” method. Sawing is performed to a depth that partially exposes the side flank. This means that the pads are still connected by the remaining metal part of the pad flanks, ensuring that the continuity of the lead-frame is maintained for the galvanized plating process. Full device separation, with a thinner saw blade, is done after tin plating. Due to the necessary sawing tolerances, this method is — like the dimples alternative — only suitable for lead-frames greater than 200 µm thick. Note that the complete height of the side flank is not covered with tin.

Electroless Tin Plating as an Alternative for Multi-I/O Packages

A technique is under investigation that would apply an electroless (immersion) tin plating process to realize the side-wettable flanks. This would allow plating of multiple pads, which could be arranged on all four sides of the DFN/QFN packages. The individual DFN/QFN package can then be fully separated prior to plating.

Unlike barrel plating, the immersion process, in which the packages are fixed on a carrier, can achieve good layer thickness conformity. A disadvantage, however, is that the growth rate of the tin is slow and the achievable tin layer thickness is less than 3 µm. Plating chemistry suppliers are starting to offer new immersion tin-plating systems that address this issue.

Nexperia offers leadless packages with the side-wettable flank option across its full standard product portfolio, including Logic and ESD protection devices, MOSFETs, diodes, and bipolar transistors. Today, ten package versions are available, and the portfolio is growing.  This video shows how Nexperia’s leadless packages are meeting the requirements of the automotive industry. By featuring side-wettable flanks, they allow a visible solder joint to develop, enabling automatic optical inspection. At the same time, the packages help save space in vehicles with increasing semiconductor content due to more electronic functions while maintaining the high safety and reliability standards needed in automotive applications. For more information visit Nexperia's website.

Stepper Motors and Their Principles of Operation

A stepper motor is a type of DC motor whose full rotation is divided into a number of equal steps.

It is a type of actuator highly compatible with numerical control, as it is essentially an electromechanical converter of digital impulses into proportional movement of its shaft, providing precise speed, position, and direction control in an open-loop fashion, without requiring the encoders, end-of-line switches, or other sensors that conventional electric motors require.
4-wire Bipolar Stepper Motor
4-Wire Bipolar Stepper Motor

The steps of a stepper motor represent discrete angular movements that take place successively and are equal in displacement; when the motor functions correctly, the number of steps performed equals the number of control impulses applied to its phases. The final position of the rotor is given by the total angular displacement resulting from the number of steps performed. This position is held until a new impulse, or sequence of impulses, is applied. These properties make the stepper motor an excellent execution element in open-loop control systems. A stepper motor does not lose steps, i.e., no slippage occurs; it remains synchronous to the control impulses even from standstill or when braked. Thanks to this characteristic, a stepper motor can be suddenly started, stopped, or reversed without losing steps throughout its operation.

The speed of a stepper motor can be controlled over a broad range of values by altering the frequency of the input impulses. For example, if the angular displacement per step is 1.8 degrees, a complete revolution requires 200 impulses, so an input frequency of 400 impulses per second yields a speed of 120 rpm. Stepper motors can operate with input frequencies of up to 2,000 impulses (steps) per second, with step values from 0.3 to 180 degrees.
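The same arithmetic, spelled out in a few lines of C++ for sanity-checking other step angles and impulse frequencies:

#include <iostream>

int main() {
    const double stepAngleDeg = 1.8;    // angular displacement per step
    const double pulseFreqHz  = 400.0;  // control impulses per second

    const double stepsPerRev = 360.0 / stepAngleDeg;        // = 200 steps
    const double rpm = pulseFreqHz / stepsPerRev * 60.0;    // = 120 rpm

    std::cout << stepsPerRev << " steps/rev at " << pulseFreqHz
              << " Hz -> " << rpm << " rpm\n";
    return 0;
}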


Stepper motors have power ratings ranging from the microwatt domain up to a few kilowatts, and are thus preferred in low- to medium-power applications where precise, high-speed movement is required, rather than in heavy-duty applications where torque is a key factor. These motors are employed in plotters, disc drives, printers, robotic arms, CNC machines, and other machines of the type.

Key features and shortcomings

Stepper motors have numerous advantages:
  • They ensure a one-to-one conversion of control impulses into displacement and can be employed in open-loop control applications;
  • They have a wide range of control frequencies;
  • They provide precision and high resolution for positioning;
  • Allow for sudden starting, stopping or reversing without losing steps;
  • Can hold their position;
  • Are highly compatible with numerical control.
But they also have disadvantages like:
  • Fixed step value (angular displacement) for a given motor;
  • Relatively low speed;
  • Low torque;
  • Low power efficiency.
The characteristics of a stepper motor are strongly dependent on the load and the type of actuation mechanism it is employed in, so that:
  • A certain resolution for the complete actuation system is imposed;
  • Loads, forces, or inertia must be reduced (reflected) to the motor’s shaft;
  • A certain speed characteristic must be defined for accomplishing movement;
  • The ratio between loads reflected to the motor’s shaft and the actual torque of the motor must be kept in adequate limits.

Stepper motor types and operation

There are various types of stepper motors, divided into linear or rotational constructions, with 1 to 5 control windings.
Based on the construction of the magnetic circuit there are three main types of motors:
  • Variable reluctance – reactive type;
  • Permanent magnet – active type;
  • Hybrid.
Variable reluctance (VR) stepper motors have uniformly distributed iron teeth on both the stator and the rotor, with control windings mounted on the stator’s teeth, while the rotor is passive. By energizing one or more phases, the rotor turns in such a manner that the magnetic field lines follow a minimum-reluctance path, i.e., the rotor’s teeth align themselves either with the teeth on the stator or with the bisectrix of the stator’s electromagnetic poles.

This type of construction allows for achieving small to medium step angles and operation at high control frequencies. However, a motor of this type cannot hold its position (i.e., it has no holding torque) when no current flows through the stator windings.

Note that the current flow through the windings of a VR motor does not need to be reversed to change the direction of rotation; direction is determined by the impulse sequence alone. This type of control, in which the current flow is never reversed, is called unipolar.
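As a minimal sketch of what unipolar control looks like in practice, the Arduino-style wave-drive loop below energizes one winding at a time and reverses the motor simply by walking the sequence backwards; pin numbers and timing are assumptions for illustration, not tied to any particular driver board.

const int phasePins[4] = {8, 9, 10, 11};  // one driver input per stator winding (assumed wiring)
int phase = 0;

void setup() {
  for (int i = 0; i < 4; i++) pinMode(phasePins[i], OUTPUT);
}

void stepMotor(int direction) {           // direction: +1 or -1
  digitalWrite(phasePins[phase], LOW);    // de-energize the current winding
  phase = (phase + direction + 4) % 4;    // advance (or retreat) through the sequence
  digitalWrite(phasePins[phase], HIGH);   // energize the next one; current never reverses
}

void loop() {
  stepMotor(+1);
  delay(5);                               // 200 steps/s, i.e. 60 rpm for a 1.8-degree motor
}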

Permanent magnet (PM) stepper motors have a different construction: here, the teeth on the rotor are made of permanent-magnet material with poles set up in a radial fashion, while the stator construction is similar. When the stator windings are energized, the magnetic fields they generate interact with the permanent magnets’ flux, generating torque that moves the rotor.

Control sequences are similar to those of VR motors; however, when, for instance, the south pole of a permanent magnet approaches an electromagnetic south pole on the stator, the current flow through that winding must be reversed in order to generate an electromagnetic north pole and maintain the direction of the forces. The phases are thus energized by impulses of alternating polarity, and this type of control is called bipolar.
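The contrast with unipolar drive is easiest to see in the sequence itself. Written as signed coil currents (the sign flip is what an H-bridge per winding provides), a two-phase-on bipolar full-step sequence looks like this sketch:

// +1 = nominal current direction, -1 = current reversed through the same
// winding via its H-bridge; stepping through the rows rotates the field.
const int bipolarSeq[4][2] = {
  // coil A, coil B
  { +1, +1 },
  { -1, +1 },
  { -1, -1 },
  { +1, -1 },  // ...then wrap around to the first row
};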

This type of motor can provide higher torque and also exhibits holding torque when the windings are not energized. Steps are large, 45 to 120 degrees, because the number of permanent magnets that can be mounted on the rotor is much smaller than the number of teeth found on the stator of a VR motor.

Hybrid stepper motors represent a combination of the other two types and are the most common type of stepper motor employed. In a hybrid stepper, the rotor is a permanent magnet mounted lengthwise, with two ferromagnetic toothed crowns mounted at its ends, so that the teeth of one crown are north poles and those of the other crown are south poles.

Specific stepper motor parameters

  1. Step angle – represents angular displacement of the rotor for one control impulse;
  2. Maximum no load start frequency – represents the maximum control impulse frequency at which the unloaded motor can start, stop or reverse without losing steps;
  3. Limit start frequency – represents the maximum impulse frequency at which the motor can start without losing steps, when a given moment of inertia and torque load are presented at the shaft;
  4. Pull-in torque – represents maximum torque load at the shaft, at which the motor can start without losing steps;
  5. Maximum no load frequency – represents the maximum impulse frequency that the motor can follow without losing synchronization;
  6. Maximum frequency – maximum frequency of impulses at which a motor keeps its timing for given torque load and inertia;
  7. Pull-out torque – maximum torque that can be maintained by the motor at a certain speed, without losing steps;
  8. Angular speed – calculated as the product of the step angle and the control frequency;
  9. Detent torque – represents the value of the holding torque presented at the motor shaft when it is not electrically energized.

Also read about how to correctly implement different types of control sequences in our dedicated article about stepper motor control.

Weekend Stories: Autonomous Drifting, Robot Builds Robots, and more Car Concepts




RC Car Autonomous Drifting

The first story of today is perhaps the most interesting thing I’ve seen lately. Researchers at MIT’s AeroAstro laboratory have presented an implementation of their new learning algorithm for optimizing control policies based on reinforcement learning, effectively obtaining a robotic Ken Block. The starting point for the reinforcement learning algorithm is a set of predetermined optimal control policies for drifting a remote control car. A set of simulation runs is performed, after which the control model optimized by the algorithm is transferred to the physical car. In the demonstration below, steady drifting is achieved quite rapidly, and we can see how efficiently the algorithm compensates for external factors. Of course, the end of the video is just as interesting.



via Automaton

Robot Builds Robots

…well, not exactly robots, but rather locomotion agents, as they are called in a very interesting study published by researchers at Cambridge University and ETH Zurich, led by Dr. Fumiya Iida. The research focused on artificial evolution driven by the development and evaluation of characteristics of physical models (phenotypes) with respect to a target function. The experiment involved an autonomous mother robot that assembled children robots and observed how far they traveled before their power ran out. This information was taken into account by the evolutionary algorithm, which optimized the assembly process. Over the course of 10 generations, the research team reported an over-40-percent increase in the distance traveled by the locomotion agents.


Source BBC News, via SimpleBotics

Android Erica Is Relatively Cute and Friendly

Professor Hiroshi Ishiguro does what he knows best – androids. Erica is a research and development platform capable of speech and gesture recognition and voice synthesis. The interesting fact is that it… or she… does not look too creepy; in fact, she seems pretty friendly and can also perform small movements of the eyes, eyelids, and head, adding to the “lifelike” factor.


via Automaton

Futuristic Looking 3D Printed RC Car

Grad student Jakub Ratajczak designed and built a very nice looking RC car with 3D printed bodywork.


via 3D Print

Apparently Apple Is Working on a Self-Driving Car

Reports say that in May this year representatives from Apple’s Special Project group met with officials at GoMentum Station, a high-security autonomous vehicle testing facility near San Francisco.

Solar Powered 3D Printed Sports Car Concept Unveiled by EVX Ventures

Australian startup EVX Ventures unveiled a sports car concept called Immortus, which employs solar panels and 3D printed nodes for its chassis, perhaps something similar to Divergent Microfactories’ system mentioned a while back.

Immortus Solar Powered 3D Printed Sports Car Concept
Immortus Solar Powered 3D Printed Sports Car Concept | Photo: EVX Ventures

Car Manufacturers Working to Increase Computer Security on Vehicles

Or at least working to implement some of it, we could say. In a way it is understandable that cars are still pretty vulnerable to attacks, since the automotive field has only recently come to hackers’ attention.

Chinese Factory Replaces 600 Humans with 60 Robots

According to the Chinese Communist Party newspaper People’s Daily, 600 line workers were replaced with 60 robots, resulting in a fivefold reduction in errors and an over-250-percent increase in production.

How Amarino Works – Controlling a Robot with an Android Smartphone

Methods to control a robot have multiplied along with the development of mobile devices like smartphones and tablets. Standard methods use a joystick, a remote control, or, most commonly, a computer.

The high processing power of mobile devices running Android has opened up new ways in which a robot can be controlled. Various programs and components exist to create a link between such a device and a robot. Amarino is a toolkit that opens communication channels via Bluetooth between a mobile device and an Arduino microcontroller. Data messages are sent in both directions, for instance between a phone and the Arduino MCU. In brief, this application makes the connection between Android and the Arduino libraries.
Amarino_control app - sensor graph

Before starting any project – Installing Amarino

The Amarino control application for Android can be downloaded for free. The application was created to give quick access to connections, to monitor input and output data, and to create data sets to be sent to the Arduino, all of which can be done easily through the available graphical interface. Access to the sensors available on the mobile platform is provided through Amarino and, according to project requirements, various information can be accessed, such as the battery level, time ticks, test events, or incoming SMS messages.


For example, if the project is controlling a car with the phone’s compass sensor, the first thing to do is to select the sensor in the application’s graphical interface; the following step is to establish a connection between the mobile device and the microcontroller. Once the connection has been established, moving the phone sends compass events to the microcontroller. Using a communication protocol is necessary to ensure safety and to avoid communication errors. The application uses events as its communication medium; in this case, a send-receive event synchronization is performed between the phone and the Arduino MCU.
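On the Arduino side, this send-receive synchronization is handled by the MeetAndroid library that ships with the Amarino toolkit: a callback is registered for an event flag, and incoming messages are polled in the main loop. In the minimal sketch below, the flag character 'c' and the 57600 baud rate are assumptions for illustration and must match the Android-side configuration and the Bluetooth module in use.

#include <MeetAndroid.h>

MeetAndroid meetAndroid;

// Called whenever the phone sends a compass event tagged with the flag 'c'
void compass(byte flag, byte numOfValues) {
  float headingDeg = meetAndroid.getFloat();  // 0..359 degrees from the phone
  // ...steer the car toward headingDeg here...
}

void setup() {
  Serial.begin(57600);                        // baud rate of the Bluetooth module (assumed)
  meetAndroid.registerFunction(compass, 'c'); // bind the event flag to the callback
}

void loop() {
  meetAndroid.receive();                      // must be called regularly to process incoming events
}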

Amarino can work in parallel with additional devices, which, once added, remain in the phone’s memory. A red or green indicator represents the connection status: green for connected, red for disconnected. A device can be connected or disconnected by pressing the Connect/Disconnect button, and the options available for each connection can be selected by long-pressing the Connection button. From this screen the user has access to the Connect/Disconnect, Show Events, and Remove Device actions.

The interface can be closed without affecting current connections; these will run in the background and inform the user through notifications. An icon in the status bar indicates that the application is running in the background. Closing the connections will close any process opened by the toolkit.
Amarino control app - addEvent

Amarino Events

  • Compass Sensor – measured in degrees, with values between 0 and 359;
  • Accelerometer Sensor – measured in m/s^2; sends data for the x, y, and z axes;
  • Orientation Sensor – measured in degrees and divided into azimuth, pitch, and roll;
  • Magnetic Field Sensor – measured in microtesla (µT) on the x, y, and z axes;
  • Phone State – sends a message to the robot if the phone status changes to IDLE, RINGING, or OFFHOOK;
  • Light Sensor – measured in lux; any change in light intensity generates an event;
  • Proximity Sensor – measured in centimeters; messages with the distance value are sent;
  • Battery Level – any change in battery level is detected;
  • Time Tick – sends messages with the current time;
  • Test Event – a function used to test the connection by sending messages with random data every 3 seconds;
  • Receive SMS – you can communicate with the robot via SMS messages limited to 30 characters.

Amarino is an application that can be used by anyone who wishes to control a robot, has some programming knowledge, and, of course, is passionate about it.

Most Advanced Robotics Simulation Software Overview

Creating a complete virtual model of a robot or system by simulating components and control programs can significantly impact the general efficiency of a project. Depending on the level of detail and accuracy of the simulation environment, there are various areas that can be analyzed, all of which affect the development life cycle, and of course its cost, to a certain extent.

Benefits of simulation

  • Reduce costs involved in robot production;
  • Diagnose source code that controls a particular resource or a mix of resources;
  • Simulate various alternatives without involving physical costs;
  • Robot or components can be tested before implementation;
  • Simulation can be done in stages, beneficial for complex projects;
  • Demonstration of a system to determine whether it is viable or not;
  • Compatibility with a wide range of programming languages;
  • Shorter delivery times.

Disadvantages of simulation

  • An application can simulate only what it is programmed to simulate – it will not simulate internal or external factors that were overlooked in the development phase;
  • A robot can encounter many more scenarios in the real world than can be simulated.

New versions of simulation software platforms offer ever more features that make simulation easier and very close to real life. Most simulation tools are compatible with programming languages like C/C++, Perl, Python, Java, LabVIEW, URBI, or MATLAB; however, they offer broadly varied feature sets depending on their purpose or focus areas. Take a look at the selection below to find the one which best suits your requirements.

Virtual Robotics Toolkit

Virtual Robotics Toolkit
Virtual Robotics Toolkit from Cogmation Robotics is a simulator for LEGO Mindstorms or VEX robots, depending on the chosen version. The product is focused on STEM education and is also useful for teams who want to prepare for robotics competitions. It supports importing 3D models from LEGO Digital Designer or other similar tools, while programming the virtual intelligent brick takes place as in real life. The software runs on Windows and is available as a single-seat, team, or class license. My review of the Mindstorms edition has more about this product.

Visual Components

Visual Components Production Line Simulation
Visual Components comes from Finland and is an advanced design and simulation suite for production lines. Entire manufacturing processes can be simulated and analyzed, including robotics equipment, material flow, human operator actions, and more. The flagship product of the series, 3DAutomate, even supports entire factory simulations. Other features include offline programming, open APIs, and an extensive component library with over 1,800 3D models of industrial robots, machinery, facilities, tools, and other hardware found in a factory; you can read more about this in my review.

RoboDK

Industrial Robot Machining Simulation
RoboDK is an offline programming tool for industrial robots which allows for scripting using Python or creating programs visually thanks to its integrated 3D simulation environment. All programs are automatically converted into robot-specific languages before being uploaded to physical robots. The software library offers 3D models for over 200 industrial robots and tools from ABB, KUKA, and Yaskawa, to mention just a few.

RoboDK provides numerous development features: it can generate alerts when robot singularities or possible collisions are detected, it graphically represents the robot's workspace, and it gives the user an overview of the whole technological process so they can program accordingly. Head to my RoboDK review to find out more.

Robot Virtual Worlds



Robot Virtual Worlds is an advanced simulation software built around the powerful ROBOTC IDE. Users can program virtual LEGO Mindstorms NXT, EV3, VEX, or TETRIX robots, either by using ROBOTC or visually via the Graphical Natural Language library extension, and observe their behavior in the 3D simulation environment, which accurately renders these robots and their interactions. RVW was primarily designed as an educational tool; however, it is well suited to all levels of expertise: beginners can learn how to program these robots, teachers and students can use it for home or lab work, and advanced users can refine code or detect errors in their programming.
Several software extensions complement the feature set even further. For instance, the Virtual Brick Emulator offers users an experience similar to programming an actual LEGO Mindstorms brick with NXT-G or LabVIEW. There are also extensions for creating custom levels, importing 3D models, or measuring distances and trajectory angles around virtual environments.

RVW runs on Windows and is available in platform-specific releases. Free trial versions are available for download, and licensing starts at US $49. There are also several iPad apps in which users can program VEX robots or play games with simulated robot behavior based on user programming.

Microsoft Robotics Developer Studio

mrds
Microsoft offers robot developers a complete tool that can be used to program and create 3D simulations of a robot and its environment. MRDS 4 supports major robotic platforms like LEGO Mindstorms and VEX, as well as various hardware such as the HiTechnic sensors and many more. The software offers various methods and technologies for rapid prototyping and includes a great number of functional libraries.


Unfortunately, as of September 22nd, 2014, Microsoft has suspended its robotics research division, leaving MRDS 4 as the last released version of the software. Naturally, this means that support is fairly limited and found mainly in online communities.

LabVIEW

LabVIEW
Developed by National Instruments, LabVIEW is a cross-platform design and development environment built around the namesake graphical programming language. The first version of the product was released in 1986, and it is currently used extensively in education, engineering, and research environments.
This is a complex ecosystem well suited for control, simulation, automation, data acquisition, analysis, measurement, and many other purposes. Large model libraries are available for simulating a vast array of hardware components, and interfacing with most standard interfaces in use today is very well supported. LabVIEW is a proprietary product; however, there are countless open source extensions for easy integration with other systems and software.

V-REP

V-REP
V-REP is a 3D simulator compatible with Windows, Mac, and Linux, and it is available either with a free educational license or with a paid license for commercial purposes.
The software allows modeling of an entire system or only certain components, like sensors, mechanisms, gearing, and so on. The control program of a component can be attached to the object itself or to the scene containing the objects, modeling the system in a way similar to reality. The platform can be used to control hardware, develop algorithms, create factory automation simulations, or for educational demonstrations.

Webots

Webots
Webots was created by the Swiss company Cyberbotics. It has a friendly interface, supports languages like C/C++, Java, Python, URBI, and MATLAB, and can interface with third-party software through TCP/IP. It is one of the most common simulation platforms, offering a long list of components that can be used in simulations along with the possibility of adding others. The software is cross-platform, and trial versions are available.

RobotStudio

ABB RobotStudio
RobotStudio is a powerful development suite created by ABB and is focused on industrial robot simulation and offline programming. The product also offers a generous list of components which can be used to simulate a robot or its sensors, actuators, grippers and more. A free version with limited functionality is available for download.

Gazebo



Gazebo can simulate complex systems and a variety of sensor components. It is used especially in developing robots for interaction: lifting or grabbing objects, pushing, or any other activity that requires recognition and localization in space. It is an open source platform for which anyone can develop plug-ins with model components, and it is compatible with ROS and Player. Gazebo runs on Linux; ported versions for Mac and Windows are also available.

Actin Simulation



Actin Simulation has been created by Energid Technologies, an American company focused on developing integrated control solutions for robotics systems used in a broad range of industries such as aerospace, medical, transportation, manufacturing and many more. The software is part of the Actin control and simulation suite which can greatly reduce the time and cost associated with the development life cycle of projects employing robotics equipment, as well as optimize existing processes and workflows regardless of the level of customization. A wide range of mainstream industrial robots are supported by default however custom robots and configurations can be modeled, simulated and analyzed to virtually any extent.

Workspace

Workspace
Workspace is a 3D simulation environment supporting a long list of languages used by industrial robot manufacturers such as ABB G-Code, ABB Rapid, Adept V-Plus, Fanuc Karel 5, Fanuc TP, Mitsubishi PA10, Mitsubishi Melfa Basic, Motoman Inform II, Kawasaki AS, Kuka KRL, Nachi Slim, Panasonic Pres and Siemens G-Code. Components and fixtures are included and can be used in building the simulation environment and robot.

Another important feature is compatibility with CAD files that can be created in other programs such as AutoCAD and imported for use in simulations. It is available either as an educational or commercial version and runs under Windows. Demo versions need to be requested from the developer.

Algodoo

Algodoo is a free 2D simulation platform for educational purposes created by the Swedish company Algoryx Simulation. It is used much like a drawing tool and is available for Windows, Mac, and, as a mobile app, the iPad.

EZPhysics

EZPhysics is free, open source software for Windows that allows 3D simulation and animation in a way similar to video games. A set of examples, complete with accessible source code, is included. Remote network interaction with the software is supported, as is integration with MATLAB.

RoboLogix

With a friendly interface, RoboLogix is an advanced 3D simulation environment for industrial robots. It is designed primarily as an educational tool, but it can also serve the purposes of engineers and robot designers. Features include testing and editing the programs used to control robots, and the ability to optimize cycle times by comparing control programs. Free evaluation versions are available.

WorkcellSimulator

WorkcellSimulator comes from Italy and can be used to simulate and program industrial robots. It is mainly used for applications involving handling, sorting, or machinery for laser cutting and similar tasks.

Roboguide

Roboguide is a software suite developed by FANUC Robotics consisting of four components, each with its own simulation role: HandlingPRO provides 3D simulation of material-handling applications, PaintPRO simulates painting processes, PalletPRO and PalletTool support the development and integration of robotic palletizing and depalletizing systems, and WeldPRO simulates the environment in which a welding robot operates. More information can be requested from the manufacturer.


The software products presented below appear not to have been maintained in a very long time, or have been integrated into other products. They are still worth mentioning because most of them remain fully functional, some are based on solid physics engines, and they can still serve well as educational tools.

OpenHRP3 is a complex environment based on a very realistic physics engine for dynamics simulation. Unfortunately, there have not been any updates or maintenance for several years, so it will probably remain at this stage of development.

SimRobot is developed by the University of Bremen and used for research on autonomous robots. The current version is compatible with Windows, Linux, and Mac OS X.

Simbad is a Java-based simulation tool that can be used for educational or scientific purposes. It is mainly used for 3D visualization and for simulating sensing, such as range or contact sensors.

Player is mainly used to simulate sensor applications. Compatible with most operating systems and programming languages, the platform can simulate a variety of sensors and their response to various stimuli. It also offers the possibility of creating 3D simulations.

robotSim:Edu was part of the STEM suite created by Cogmation Robotics. It is no longer maintained as a standalone product, having been succeeded by the Virtual Robotics Toolkit mentioned earlier in the article.


RoboWorks 3.0 is an excellent 3D modeler for educational use and industrial simulation. 3D graphics can be added easily, and it is compatible with C, C++, the C/C++ interpreter Ch, VB, VB.NET, LabVIEW, and more. It is available as a free demo.

Circle the Wagons: Choosing the Right Protection ICs for Your Smart Load

This article reviews the fundamental features of an effective protection scheme, highlights the shortcomings of a typical protection implementation, such as a high bill of materials and large PC board footprint, and introduces a new family of integrated, highly flexible protection ICs that address these concerns.

Protection circuits are the unsung heroes of modern electronics. The long electrical chain from the AC line to the digital load, no matter the application, is interspersed with fuses and transient voltage suppressors of all sizes and shapes. Along the electrical path, electrical stressors, such as inrush currents due to storage capacitors, reverse currents due to power outages, and overvoltages and undervoltages induced by inductive load switching or lightning, can damage precious electronic loads. This is especially true for microprocessors and memories, which are built with fragile sub-micron, low-voltage technologies. Like the pioneers of the old west circling the wagons, it is necessary to build a perimeter of protection around the load to handle these potentially catastrophic events (Figure 1).

Figure 1. Unprotected CPU on fire.

Typical System Protection

Figure 2 shows a typical system protection scheme around the smart load, for example, a microprocessor. A DC-DC converter — complete with control (IC2), synchronous rectification MOSFETs (T3, T4) and associated intrinsic diodes (D3, D4), and input and output filter capacitors (CIN, COUT) — powers the microprocessor. A voltage surge that comes from the 48V power bus (VBUS), if directly connected to VIN, would have catastrophic consequences for the DC-DC converter and its load. For this reason, front-end electronic protection is necessary. Here the protection is implemented with a controller (IC1) driving two discrete MOSFETs T1 and T2.

Figure 2. Typical electronic system and protection.

The protection electronics must be able to handle fault conditions like overvoltage/undervoltage, overcurrent and reverse current flow within the limits of its voltage and current rating. If the expected voltage surge exceeds the protection electronics rating discussed here, additional layers of protection can be added, in the form of filters and transient voltage suppression (TVS) devices.

Overvoltage Protection

If the DC-DC converter's maximum operating voltage is 60V, the protector IC will consist essentially of a MOSFET switch (T2) that is closed within this operating range and open above it. The associated intrinsic diode D2 is reverse-biased in case of overvoltage and does not play any role. The presence of T1/D1 is also inconsequential in this case, with T1 fully 'on'.

Overcurrent Protection

Even when the incoming voltage is confined within the allowed operating range, problems can persist. Upward voltage fluctuations generate high CdV/dt inrush currents that can blow a fuse or overheat the system, reducing its reliability. Accordingly, the protection IC must be equipped with a current limiting mechanism.
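
To put numbers on this (the capacitance and slew rate here are illustrative values, not taken from the article): a 10 µF input capacitor subjected to a 5 V/µs upward fluctuation demands

I = C × dV/dt = 10 µF × 5 V/µs = 50 A

which is more than enough to blow a typical fuse; the protection IC's current limiter clamps this surge instead.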

Reverse Current Protection

A MOSFET’s intrinsic diode between drain and source is reverse-biased when the MOSFET is ‘on’ and forward-biased when the MOSFET voltage polarity reverses. It follows that T2 by itself is not able to block negative-input voltages. These can happen accidentally, for example during a negative transient or a power outage, when the input voltage (VBUS in Figure 2) is low or absent and the DC-DC Converter input capacitor (CIN) feeds the power BUS via the intrinsic diode D2. To block the reverse current, the transistor T1, placed with its intrinsic diode D1 opposing the negative current flow, is necessary. The result is a costly back-to-back configuration of two MOSFETs with their intrinsic diodes oppositely biased.

Integrated Back-to-Back MOSFETs

The need for a back-to-back configuration is obvious if discrete MOSFETs are utilized, as in Figure 2, and less obvious if the protection is monolithic, namely when the control circuit and MOSFET are integrated into a single IC. Many integrated protection ICs equipped with reverse-current protection utilize a single MOSFET, with the additional precaution of switching the device body diode to reverse bias no matter the MOSFET polarization. This implementation works well with 5V MOSFETs, which have a symmetrical structure with respect to source and drain: the source-body and drain-body maximum operating voltages are the same. High-voltage MOSFETs, in our case, are not symmetrical, and only the drain is designed to withstand high voltage with respect to the body. The layout of high-voltage MOSFETs is more critical, and HV MOSFETs with optimized RDS(ON) come only with the source shorted to the body. Bottom line: a high-voltage (>5V) integrated solution will have to utilize a back-to-back configuration as well.

Motor Drive Applications

In motor driver applications, the DC motor current is PWM-controlled with a MOSFET bridge driver. During the OFF-portion of the PWM control cycle, the current recirculates back to the input capacitor, effectively implementing an energy recovery scheme. In this case, reverse current protection is not called for.

Traditional Discrete Solution

Figure 3 illustrates the high cost, in terms of PC board area and bill of materials, of utilizing a discrete implementation like the one in Figure 2 (24VIN, -100 to +40V protection). The PC board area is a hefty 70 mm².

Figure 3. Traditional discrete protection (70 mm²).

Integrated Solution

Figure 4 shows the advantage of integrating the control and power MOSFETs in the same IC, packaged in a 3 mm × 3 mm TDFN-EP package. In this case, the PC board area occupation is down to roughly 40% of the discrete solution (28 mm²).

Figure 4. Integrated protection (28 mm²).

Integrated Protection Family

A new family of adjustable overvoltage and overcurrent protection devices provides an example of such an integrated solution. It features a low-on-resistance (210 mΩ) integrated FET. The devices protect downstream circuitry from positive and negative input voltage faults up to ±60V. The overvoltage-lockout threshold (OVLO) can be adjusted with optional external resistors to any voltage between 5.5V and 60V; likewise, the undervoltage-lockout threshold (UVLO) can be adjusted with optional external resistors to any voltage between 4.5V and 59V. The devices feature programmable current-limit protection up to 1A, with the threshold set by connecting a suitable resistor to the SETI pin. The MAX17608 and MAX17610 block current flowing in the reverse direction, whereas the MAX17609 allows reverse current flow. The devices also feature thermal shutdown protection against internal overheating. They are available in a small, 12-pin, 3 mm × 3 mm TDFN-EP package and operate over the -40°C to +125°C extended temperature range.
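
As a sketch of how such resistor-adjustable thresholds typically work (generic voltage-divider math with a hypothetical internal comparator reference; the real value and equations must come from the MAX17608 datasheet before choosing parts):

# Sketch of resistor-divider threshold selection for an adjustable
# OVLO/UVLO pin. V_REF is a HYPOTHETICAL internal comparator reference;
# consult the MAX17608 datasheet for the actual value and equations.
V_REF = 1.21  # volts (assumed for illustration)

def trip_voltage(r_top, r_bottom, v_ref=V_REF):
    """Input voltage at which the divider output crosses v_ref."""
    return v_ref * (r_top + r_bottom) / r_bottom

# Example: 301k over 13.3k places the trip point near 28.6 V
print(round(trip_voltage(301e3, 13.3e3), 1))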

Conclusion

Electronic loads require protection from the effects of power outages and fluctuations, inductive loads switching, and lightning. We reviewed a typical protection solution, with its low level of integration that leads to inefficiencies in terms of PC board space and high bill of materials. A new family of integrated, highly flexible, low RDS(ON) protection ICs provides direct and reverse voltage and current protection with a minimum bill of materials and PC board space occupation. With the right protection ICs, "the wagons" are tightly circled, building a perimeter of protection around the load for enhanced safety and reliability.


This article was co-written by Nazzareno Rossetti and John Woodward.

Adding a Capacitive Touch Display Module to the BeagleBone Black

Developers and engineers who want to create intuitive interfaces for industrial and home automation often look for LCD displays with minimal button interaction. Capacitive displays remove the need for any buttons while maintaining interactivity. On top of that, it's often necessary to log in to or update systems remotely. The BeagleBone paired with a capacitive touch LCD display is a development platform for small, cost-effective solutions.

The BeagleBone Black is a low-cost, community-supported development platform. The BeagleBone boards are designed as open source alternatives to other development platforms, allowing the designer or engineer to commence development with the BeagleBone and progress onto their own custom systems using the same hardware. All of the schematics, layout files, and bill of materials are freely available.

The board is based around the Texas Instruments Sitara AM335x system-on-chip with a Cortex-A8 ARM processor. The processor core runs at 1 GHz, has a PowerVR SGX530 graphics core, and is paired with 512 MB of low-power DDR3L memory clocked at 400 MHz. Peripherals include up to 65 GPIOs, a single USB 2.0 port, a 10/100 Ethernet jack, a microSD slot for storage, and a mini HDMI connector.

The BeagleBones use stackable daughterboards called 'capes' to attach a wide variety of community-developed boards, adding functionality ranging from LCD displays and motor drivers to cellular modems and GPS/GPRS modules. One example of a range of LCD displays designed specifically for the BeagleBone Black is the GEN4 series manufactured by 4D Systems. The range includes 4.3, 5.0, and 7.0-inch primary displays for direct user interaction and information display.

These displays are available in resistive touch (GEN4-4DCAPE-xxT), capacitive touch (GEN4-4DCAPE-xxCT), and non-touch (GEN4-4DCAPE-xx) variants, where xx is 43, 50, or 70. An optional external button board is available for actions such as up, down, left, right, enter/return, power, and reset, or as required by the user.

The capacitive touch display comes with a professional looking cover lens bezel, which is a glass front with overhanging edges, allowing the display to be mounted directly into a panel using special adhesive on the overhanging glass.

Getting Started

To get up and going with the BeagleBone Black with the 4D Systems LCD Cape, the following items are needed:
  • BeagleBone Black           
  • 4D Systems 4.3" LCD Displays
  • 4D Systems 4.3" Cape Adaptor
  • 4GB MicroSD card
  • USB to micro SD card adaptor
  • 5 V, 2 A power supply
  • Mini USB to USB cable
  • Wireless keyboard and mouse combo (optional)
  • RJ45 Ethernet cable (optional)
One of the appealing features of the BeagleBone is the gamut of options for interfacing with the device. Using only a mini USB cable, the user can power the board and use a serial terminal like PuTTY or Tera Term to log in to the command line. The default username is 'debian' and the password is 'temppwd'.

Alternatively, the BeagleBone will register as a USB device on the host machine. Once the correct USB network drivers have been installed, the user can log in to the BeagleBone through the web server interface running on the board (Chrome or Firefox; Internet Explorer is not supported) at http://192.168.7.2 (see Figure 1). In this web server interface, it is possible to write scripts in BoneScript in the Cloud9 IDE; BoneScript is a Node.js library optimized for the Beagle family using familiar Arduino-style function calls.


Figure 1. Web server interface running on BeagleBone Black. 

To attach the 4D Systems LCD cape: with the power off, connect the 4D cape adaptor to the BeagleBone Black. Take care to use the correct orientation and not to bend any of the pins, as this can damage the cape. Then attach either end of the supplied 30-way FFC cable to the 4DCAPE display. The exposed metal should point upwards and the blue stiffener should face the PCB, as in Figure 2.


Figure 2. BeagleBone Black 4DCape Adaptor for LCD Display.

Connect the other side of the FFC cable to the adaptor board, again ensuring the exposed metal pads face upwards, as in Figure 3. If attaching any other capes, ensure there are no pin conflicts by checking the BeagleBone schematics. In Figure 3, an EEPROM can be seen on the back of the LCD cape with an I2C address selectable via DIP switch. This can be used to resolve I2C address conflicts with any other attached I2C devices.


Figure 3. Back of 4.3" 4D Systems LCD Display.

The BeagleBone comes loaded with Debian 3.8.13 on the on-board 4GB eMMC NAND flash, which unfortunately does not contain the correct drivers or overlays for this display. It is possible to update the Linux distribution in place, but that takes a little longer. The fastest way to get up and going is to walk through this tutorial to load the latest version of Debian onto a 4GB microSD card (4.4.54 at the time of writing). The download of the Debian Linux distribution could take 30 minutes or more, and writing to the microSD card should take another 20 minutes to complete. Other Linux distributions, like Angstrom and Android, also support the 4DCAPE but involve more work to get up and going.


Insert the microSD card into the holder on the bottom side of the BeagleBone while the power is off. Hold down the BOOT button (see Figure 4) and insert the 5 VDC plug. The BOOT button is a little difficult to access with the 4DCAPE attached, but a small screwdriver should reach it. The 4DCAPE draws significant current (typically 620 mA for the GEN4-4DCAPE-43CT), far more than any USB port can handle, which is why a 5 V, 2 A external power supply is necessary. The USB jack will not supply power to the 4D cape unless the solder bridge jumper on the top of the 4DCAPE adaptor board is cut and resoldered.


Figure 4. BeagleBone Black peripheral and button locations.

After a minute or two, the screen should flash white, and then a flashing cursor can be seen in the top right. Plug the mini USB cable into the BeagleBone while it is powered and the other end into your computer. Start a serial session using PuTTY or Tera Term with the following settings: 115200, 8, N, 1. The default username is 'debian' and the password is 'temppwd'. Note that at this point, with the LCD display attached, it is not possible to access the web server interface.

Enabling the Graphical Interface

The capacitive touch screen doesn't work at the command line, so it makes sense to use the graphical interface. In order to activate the graphical interface, some small modifications need to be made to the /boot/uEnv.txt file. Vi, Vim, and Nano are all Linux command-line text editors that can be used to edit this file. Check out this beginner's guide to the command-line text editor Nano.
The following command will open the file to be edited:

sudo nano /boot/uEnv.txt

Before editing any file, it is recommended to create a backup first. This can be done using the following command:

sudo cp /boot/uEnv.txt /boot/uEnv-Backup.txt

Find the following lines in the uEnv.txt file and change them as shown below. This disables the HDMI interface, which conflicts with some of the pins used by the LCD cape.

##Beaglebone Black/Green dtb's for v4.1.x (BeagleBone White just works..)

##Beaglebone Black: HDMI (Audio/Video) disabled:
dtb=am335x-boneblack-emmc-overlay.dtb

##Beaglebone Black: eMMC disabled:
#dtb=am335x-boneblack-hdmi-overlay.dtb

##Beaglebone Black: HDMI Audio/eMMC disabled:
#dtb=am335x-boneblack-nhdmi-overlay.dtb

##Beaglebone Black: HDMI (Audio/Video)/eMMC disabled:
#dtb=am335x-boneblack-overlay.dtb

##Beaglebone Black: wl1835
#dtb=am335x-boneblack-wl1835mod.dtb

##Beaglebone Black: replicape
#dtb=am335x-boneblack-replicape.dtb

##Beaglebone Green: eMMC disabled
#dtb=am335x-bonegreen-overlay.dtb


Once the changes have been made, save the file and reboot:

sudo shutdown -r now

The restart can take up to a few minutes. Alternatively, press the reset button on the board.

The screen should now boot into the Openbox graphical interface. It is useful to have a keyboard and mouse to fully interact with the interface, but they are not strictly necessary. There is only one USB 2.0 port available, so either a wireless keyboard-and-mouse combo or a USB hub can be used.
Attach an Ethernet cable to a DHCP-enabled network router, and internet access can be obtained through QupZilla or Chromium. Figure 5 shows the QupZilla web browser working on the 4.3" LCD display.


Figure 5. Qupzilla web browser running on 4.3" 4D Systems LCD display.

Due to the size of the screen, some of the programs only show part of the window.

Conclusion

Setting up the BeagleBone Black with the 4D Systems LCD cape is straightforward, so the user can get developing quickly. It is handy that the display overlays are available in the latest BeagleBone Debian distribution. The total setup time should be less than 90 minutes, including download times. Once up and running, numerous options are available through the Openbox window manager.

Installing a Tire Pressure Monitoring System



Newer cars typically feature a tire pressure monitoring system (TPMS) that warns drivers when a tire is significantly under-inflated. This safety feature was deemed important enough that the United States Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) published a Federal Motor Vehicle Safety Standard for it. Tire pressure affects a vehicle's fuel economy and handling, and severe under- or over-inflation can lead to catastrophic tire failure. In this application note, over-inflation detection is also included. The system may also be used for anti-theft purposes, warning when one of the sensors stops sending information (in that case, one of the tires may have been stolen).

There are two types of TPMS. The first is the direct system, which installs a pressure sensor in each wheel to measure the pressure in each tire directly, sending the information to the vehicle's on-board computer. The computer warns the driver when the air pressure in any tire drops at least 25% below the recommended cold tire inflation pressure, or rises 25% above it. Direct systems are typically more accurate and reliable, and most are able to indicate which tire is under-inflated.
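
Expressed in code, the warning thresholds follow directly from the recommended cold inflation pressure. A minimal Python sketch (the 32 PSI example value is illustrative, not from the article):

# Direct-TPMS warning thresholds, per the 25% margins described above.
def tpms_limits(recommended_psi, margin=0.25):
    low = recommended_psi * (1 - margin)    # under-inflation warning level
    high = recommended_psi * (1 + margin)   # over-inflation warning level
    return low, high

# Example with a hypothetical 32 PSI recommended cold pressure
print(tpms_limits(32.0))   # (24.0, 40.0): warn outside this range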

The other type is the indirect system. It uses the wheel speed sensors of the vehicle's anti-lock braking system to compare the rotational speed of one tire against the others. If a tire is low or high on pressure, it will roll at a different number of revolutions per kilometer than the other three, alerting the vehicle's on-board computer. Indirect systems are unable to generate accurate readings when all four tires are losing pressure at the same rate, as happens through the effects of time and temperature.

In this implementation, the direct tire pressure measurement approach is used. The pressure is measured and analyzed locally at each tire with the SLG46620, a GreenPAK configurable mixed-signal IC (CMIC). The SLG46620 sends under-pressure, over-pressure, or correct-pressure status to the central system (the onboard computer or a dedicated system) via the communication system (Figure 1). With this implementation, a TPMS may also be retrofitted to older cars by adding a small central system to the car's console.


Figure 1. Pressure measurement and Preprocessing System schematic block diagram.

Pressure Sensor

In this type of application, choosing the correct sensor is one of the most important stages of the design. An automotive application not only requires a sensor with the correct resolution and pressure range; it also requires sensors certified for automotive safety applications, with low current consumption.

In this case, there are two sensor types to choose from: differential pressure sensors and absolute pressure sensors. Differential sensors measure the difference between the actual pressure and atmospheric pressure. Absolute sensors use absolute zero as the reference pressure, measured relative to a full vacuum (outer space).

Since absolute pressure uses absolute zero as a definitive reference point, it remains precise and accurate regardless of changes in ambient or process temperature. This is the main reason for choosing an absolute pressure sensor.

The pressure sensor selected for this application is the SM5420C-060 from SMI Pressure Sensors. It is an absolute pressure sensor with an operating pressure range of 0 to 60 PSI. It runs from a 5V supply (compatible with the SLG46620) with a low current consumption of 1 mA. One advantage of this sensor is that it is certified for automotive applications, being qualified to AEC-Q100 (the Automotive Electronics Council standard for failure-mechanism-based stress test qualification of integrated circuits).

The selected pressure sensor has a differential output proportional to the measured pressure. Based on the span and zero-offset figures in Table 1 (100 mV typical span over the 0 to 60 PSI range, zero offset), it can be modeled as:

VOUT+ − VOUT− ≈ (100 mV / 60 PSI) × P

The output circuit can be thought of as a Wheatstone bridge, as seen in Figure 2.


Figure 2. Output circuit – Pressure Sensor.

The most important characteristics of the sensor are shown in Table 1.

Table 1. Main characteristics of the SM5420C-060 pressure sensor.

Parameter | Value
Power Supply | 5 V
Input Current | 1 mA
Operating Temperature | -40 to 125°C
Operating Pressure | 0 to 60 PSI
Span | 100 mV typical (135 mV max)
Zero Offset | 0 mV

Schematic Diagram

Given the sensor's differential outputs, and choosing the simplest way to condition the signal for acquisition by the SLG46620's ADC, the external circuit is implemented as shown in Figure 3.


Figure 3(a). Signal conditioning schematic circuit.


Figure 3(b). Wireless Communication System.

The signal conditioning circuit can be divided into two parts.

First, the differential output of the sensor (OUT+ and OUT−) is converted to a single-ended signal with operational amplifier U1, using a typical differential configuration with unity gain. With this circuit, the signal obtained at the output of U1 is:

VU1 = VOUT+ − VOUT−

It’s important to mention that, if pressure is zero, the output voltage is zero. That’s why the operational amplifier must be a rail-to-rail operational amplifier.

The other part of the design is the second operational amplifier (U2), which conditions the signal level to meet the input specifications of the SLG46620's analog-to-digital converter.

The electrical specifications in the SLG46620 datasheet state that the ADC, in single-ended configuration, must have a minimum input voltage of 30 mV/G (where G is the gain of the ADC's programmable gain amplifier) to acquire the signal. To obtain this minimum voltage level, the second operational amplifier adds the voltage Vmin to the single-ended signal coming from the sensor and the first operational amplifier. With this configuration, the output signal (Vout) can be connected directly to the analog input of the SLG46620:

Vout = (VOUT+ − VOUT−) + Vmin

When the specifications of the ADC are considered, the maximum input voltage level in single-ended mode is 1030 mV/G. In the worst case, the maximum differential output level of the sensor can be 135 mV. Configuring the gain to 8 would be risky (the maximum input level would then be about 137 mV), and low pressures might not be compatible with the minimum input voltage of the ADC. For this reason, the ADC and the PGA are configured with a gain of 4. With this configuration, the maximum input level to the ADC is 274 mV and the minimum input level is 7.5 mV. In the case of the PGA, the linear range is between 23 mV and 236 mV.

With this configuration, Vmin must be between 23 mV and 99 mV. The selected value is 60 mV, so the output range of the conditioned signal is 60 mV to 195 mV. The Vmin voltage is obtained from the SLG46620 DAC, connecting its output to one of the GPIOs.
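
These ranges are easy to sanity-check. A short Python sketch using the values above (worst-case 135 mV sensor span over 0 to 60 PSI, Vmin = 60 mV):

# Sanity check of the conditioned-signal range, using the article's values:
# worst-case sensor span of 135 mV over 0 to 60 PSI, and Vmin = 60 mV.
SPAN_MAX_MV = 135.0       # worst-case sensor span (100 mV typical)
FULL_SCALE_PSI = 60.0
VMIN_MV = 60.0            # offset added by U2, from the SLG46620 DAC

def v_out_mv(pressure_psi, span_mv=SPAN_MAX_MV):
    """Conditioned signal at the SLG46620 analog input, in mV."""
    return (span_mv / FULL_SCALE_PSI) * pressure_psi + VMIN_MV

print(v_out_mv(0.0))    # 60.0 mV at 0 PSI
print(v_out_mv(60.0))   # 195.0 mV at 60 PSI, inside the PGA's linear range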

Implementation

As described above, the pressure measurement described in this application note is part of a car safety system. The aim of this implementation is to take advantage of the small size and low current consumption of the SLG46620, allowing the pressure to be measured and processed locally.

Another important benefit of this implementation is processing speed. Considering the timing requirements of the NHTSA standard, the SLG46620 processes the sensor data very quickly, leaving the onboard computer free to make all the necessary verifications before reporting low or high pressure.

The GreenPAK circuit design implementation is shown in Figure 4.


Figure 4. Pressure Measurement block diagram.

The single-ended signal from the sensor is obtained from PIN 8, which connects to the input of the PGA. The PGA configuration is shown in Figure 5. It shows the PGA configured in Single-Ended Mode with a Gain of 4 and it’s always powered on.


Figure 5. Programmable Gain Amplifier configuration.

The output of the PGA is connected to the Analog to Digital Converter. The configuration of the ADC is a single-ended mode, with the RC oscillator as the ADC clock as shown in Figure 6. With this clock configuration, the ADC sample rate is 1.56 ksps.


Figure 6. ADC configuration.

The ADC conversion is analyzed with the DCMP/PWM blocks. DCMP0 compares the pressure with the low limit, indicating when the pressure is lower than the configured value with a low level on its OUT+ output. The DCMP/PWM 0 block is configured as DCMP, comparing the positive input with the value stored in Register 0.

DCMP2 compares the pressure with the high limit, indicating when the pressure is higher than the configured value with a high level of its OUT+ output. The PWM/DCMP 2 block is configured as DCMP, comparing the positive input with the value stored in Register 2.

DCMP0 configuration is shown in Figure 7. The configuration of DCMP2 is the same as the configuration of DCMP0.


Figure 7. DCMP/PWM0 configuration.

To determine the outputs of the system, 2-bit LUT4, LUT5 and LUT6 are used. LUT4 output is high only when a low pressure is detected (low level at OUT+ of DCMP0 and at OUT+ of DCMP2). LUT5 output is high only when the correct pressure is detected (high level at OUT+ of DCMP0 and low level at OUT+ of DCMP2). LUT6 output is high only when a high pressure is detected (high level at OUT+ of DCMP0 and high level at OUT+ of DCMP2). Figure 8 shows the configurations of 2-bit LUT4, LUT5 and LUT6.


Figure 8. From left to right: LUT4 configuration, LUT5 configuration, LUT6 configuration.

The output of 2-bit LUT4 (low-pressure output) is connected to Pin 16, the output of 2-bit LUT5 (correct-pressure output) is connected to Pin 17, and the output of 2-bit LUT6 (high-pressure output) is connected to Pin 18.
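
The three LUTs implement a simple two-input truth table. A Python rendering of the logic described above (for illustration only; the actual logic lives in the GreenPAK LUT configuration):

# Truth table implemented by 2-bit LUT4, LUT5, and LUT6, per the text above.
# dcmp0_out: OUT+ of DCMP0, high when the pressure is above the low limit.
# dcmp2_out: OUT+ of DCMP2, high when the pressure is above the high limit.
def tire_status(dcmp0_out, dcmp2_out):
    low_pressure     = (not dcmp0_out) and (not dcmp2_out)   # LUT4 -> Pin 16
    correct_pressure = dcmp0_out and (not dcmp2_out)         # LUT5 -> Pin 17
    high_pressure    = dcmp0_out and dcmp2_out               # LUT6 -> Pin 18
    return low_pressure, correct_pressure, high_pressure

for d0, d2 in [(False, False), (True, False), (True, True)]:
    print(d0, d2, tire_status(d0, d2))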

DAC0 is included in the design as the voltage reference Vmin. It is configured to generate 60mV and is connected to GPIO19 via the VREF0 block. Its configuration is shown in Figure 9.


Figure 9. DAC0 configuration.

Test and Conclusions

To test the implementation, a linear ramp of pressure was applied to the sensor, from a low pressure to a high pressure along the analyzed range. To analyze the results, pins 16 to 18 (in this order) were logged with a logic analyzer. These outputs can be seen in Figure 10.


Figure 10. Pressure measurement and preprocessing outputs test.

It can be seen that the system is tested for the three possible states, obtaining a high level on the corresponding output pin of the SLG46620.

Conclusion

In this application note, the SLG46620 and SLG88103 are used in a car safety system application as the ADC and preprocessing unit of a bigger system. We have shown how to condition the signal to meet the ADC and PGA specifications of the Silego GreenPAK, and the entire implementation has been described. It is important to mention that the values used to compare against the ADC conversion can be changed for different car and tire models without changing the logic of the system.

Changes in Electricity Generation and Use Strengthen the Case for DC Power Distribution

The growing imperative to use energy as efficiently as possible, combined with the need to create electricity from a wider variety of energy sources including renewables, is driving growing interest in DC power distribution over long distances and within buildings.

AC vs DC Distribution: Conflict and Coexistence

Nikola Tesla’s victory over Thomas Edison in the arguments that led the world’s power grids to adopt high-voltage AC distribution could be short-lived: after only about 130 years, the technological pendulum could be swinging back in favor of high-voltage DC (HVDC) distribution.

As things stood in the late 1880s, AC power distribution was considered the better economic and practical proposition, despite some imperfections. Long-distance power transmission occurs at high voltages to minimize I2R losses, and at the time the required step-up was easier to achieve using AC transformers than with the rotating machines needed to generate high DC voltages. Suitable transformers were also smaller, less expensive, and more reliable.

With the arrival of mercury-based rectifiers in the early 20th century, later followed by high-voltage, high-current solid-state thyristors and most recently IGBT modules (Figure 1), generation of high DC voltages became simpler, making the advantages of HVDC distribution more accessible and attractive.


Figure 1: Modern power semiconductors such as high-voltage, high-current IGBT modules enhance the economics and reliability of HVDC transmission. Image courtesy of Infineon.

HVDC can overcome some of the drawbacks encountered when distributing power at high AC voltages. HVDC transmission distances and capacity are not limited by inductive and capacitive effects such as charging currents. In addition, the entire cable cross section can be utilized to support current flow, as there is no skin effect. Moreover, AC does not permit transmitting power between networks operating at different frequencies, or where the networks cannot be synchronized; and concerns over instability or unsuitable power flows can prevent new interconnections.
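
The skin-effect advantage is easy to quantify with the standard skin-depth formula (the copper values below are textbook figures, not from this article):

δ = sqrt(ρ / (π × f × μ))

For copper (ρ ≈ 1.7 × 10^-8 Ω·m, μ ≈ 4π × 10^-7 H/m) at 50 Hz, δ ≈ 9 mm, so AC current crowds into roughly the outer centimeter of a large conductor. At DC, the frequency term vanishes and the entire cross section carries current.
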
Today, HVDC is alive and well in numerous distribution projects worldwide, including long-distance links such as the Caprivi Link in Namibia and the Three Gorges-Guangdong link in China, each over 900 kilometers, as well as various cross-border and cross-sea links throughout Europe, New Zealand, the Japanese islands, and elsewhere. Back-to-back HVDC connections are also widely installed as a stable and convenient means of sharing power between AC grids.

HVDC in the Grids of Tomorrow

HVDC distribution could become even more pervasive as environmentally-driven changes in power-generation policy drive progress toward distributed power generation supplying hybrid infrastructures that comprise diverse types of grids known collectively as the Smart Grid.

Power generation is moving from a centralized model reliant on a small number of large fossil-fuel power stations to incorporate a variety of sources, such as energy generated by wind or solar farms. The raw output from a wind turbine has unstable voltage and frequency, and so is typically converted to DC to facilitate stabilization, before being re-converted to the required AC frequency and voltage for feed-in to the grid. Similarly, the DC output of a PV array must be converted to AC of the right voltage and frequency before it can be put on the grid.

With the increased reliance on non-fossil-fuel energy sources, generation is becoming more distributed as solar and wind farms are installed in naturally advantageous locations and small local wind or solar generators – microgenerators – are permitted on commercial or residential premises. These do not have to be connected to the grid, although generous feed-in tariffs are sometimes offered to encourage owners to sell any unused capacity to their utility company.

As power generation is moving towards a more distributed model with privately owned microgenerators, whether connected to the main grid or otherwise, the concept of the microgrid is emerging (Figure 2). A microgrid combines localized groups of electricity sources and loads that can be controlled and managed either as islands or when synchronized and connected with the main power infrastructure, or macrogrid. Microgrids can potentially deliver several advantages, including reduction in greenhouse gas emissions, lower utility bills for end users, and a means of sharing the cost of modernizing or upgrading local power infrastructures. Microgrids exist within the concept of the Smart Grid, and can help to realize its key objectives, such as stability, reliability, and security of electricity supply as energy sources become more distributed and less predictable.


Figure 2: A microgrid can be powered from various sources, and connected to the main grid or islanded. Image courtesy of Berkeley Lab.

Changing Consumption Argues for DC Microgrids

Alongside the changes taking place in the distribution infrastructure, the power needs of electricity users have also changed significantly since the early decisions regarding electricity distribution were made. Not only has overall power demand risen steadily, but the nature of the equipment being used in typical homes and businesses has also changed.

For much of the second half of the 20th century, electricity in even the most state-of-the-art homes was consumed predominantly by AC-powered incandescent lighting and a small number of appliances such as washing machines, refrigerators and dishwashers containing large AC induction motors. In contrast, in today’s homes, there is a shift towards LED lighting, which is an inherently DC-powered technology, as well as new and more energy-efficient appliances that contain variable-speed drives featuring smaller and sometimes brushed or brushless DC motors. There are also now large numbers of electronic devices such as PCs, routers, game consoles, and set-top boxes that operate from an internal or external AC/DC power supply. Many other devices, such as tablets, smartphones or cordless power tools, although battery-powered, are recharged at low DC voltages. Today’s homes contain more electrical and electronic devices than ever before, each containing its own circuitry to convert the standard high-voltage AC-line supply to a suitable low DC voltage at the point of use.

Residential homes are not the only premises whose electricity consumption is dominated by essentially low-voltage DC-powered equipment. The telecom switches and data centers at the heart of today's digital economy are among the largest consumers of power in the modern world. One data center can consume as much energy as many thousands of homes, and ultimately all of that power goes to servers that require DC rails at voltages down to 1V or less.

In the never-ending search to maximize efficiency and eliminate energy losses as far as possible, the losses incurred during power conversion, between AC and DC as well as between different DC voltages, are coming under increased scrutiny. Some within the data center industry are advocating HVDC distribution as the most efficient approach for distributing power within the premises.


Figure 3: Conventional AC distribution in data centers involves multiple conversion steps. Image courtesy of Vicor

A conventional strategy for data center power distribution (Figure 3) initially steps down and rectifies the incoming AC line supply to allow interconnection with a battery backup system. The output of this network, which is often 48V DC, is then converted to an AC voltage of about 200V for distribution within the building. At the cabinet level, this high-voltage AC supply is rectified and down-converted to an intermediate DC voltage, and ultimately converted at the point of load to provide the desired power rails for the processors and other ICs on the server boards. Alternatively, the output from the high-voltage AC/DC power supply and battery backup network, typically at 48V DC, is distributed throughout the data center and then converted by a combination of intermediate and point-of-load converters on the server boards.

As data center power demands continue to rise, concern is growing over the energy lost through multiple power-conversion stages. In 48V DC systems, I2R losses are considerable when supplying the latest servers that can consume well over 1kW at peak power. While discussion continues about how to make AC distribution systems more efficient, HVDC distribution at a voltage of 380-400V promises a way of reducing I2R losses while also eliminating the DC-to-AC inverter and its associated losses. The incoming AC supply at line voltage is rectified and converted to a high DC voltage of 380-400V. Large amounts of power can be distributed at this high voltage at very low current, resulting in minimal energy loss. The high voltage is then stepped down using straightforward DC-to-DC conversion in the cabinets. HVDC architectures have been proposed that can be adopted incrementally in data centers that are currently either using high-voltage AC or low-voltage DC distribution.
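
A quick worked comparison illustrates the I2R argument (the 50 mΩ path resistance is an illustrative assumption, not a figure from the article): delivering 1 kW through a distribution path with 50 mΩ of resistance at 48 V requires about 20.8 A, dissipating I2R ≈ 21.7 W in the path. At 384 V, the current falls to about 2.6 A and the loss to roughly 0.34 W. Raising the voltage by a factor of 8 cuts the conduction loss by a factor of 8² = 64.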

This trend within the data center industry is strengthening the case in favor of DC power distribution within buildings. The EMerge Alliance is championing the cause of DC distribution in commercial buildings, and developing standards that aim to enable hybrid AC and DC infrastructures to maximize flexibility as well as efficient use of energy and space. The Alliance has published standards for 24V DC distribution within occupied spaces, and for hybrid AC and 380V DC distribution in data centers (Figure 4) and telecom offices.


Figure 4: The Emerge Alliance has developed standards for high-voltage DC distribution in data centers. Image courtesy of EMerge Alliance

The 24V DC standard proposes an in-building microgrid for powering loads such as lighting, PCs, projectors or televisions. More than 60 products from well-known manufacturers have so far been registered, including LED lamps, fluorescent ballasts, overhead fans, converters, control units and wiring products. The Alliance is also developing a standard for DC microgrids to supply loads in outdoor spaces such as exterior lighting and electric-vehicle chargers. In addition, 380V DC could be proposed for powering large building loads such as HVAC systems, industrial equipment, and high-bay lighting, as well as domestic appliances such as ovens and washing or drying machines with variable-speed drives.

Conclusion

The contest between AC and DC distribution techniques, often imagined as a battle won by AC at the beginning of the electrical age, has in fact evolved into peaceful coexistence as engineers have developed technologies to take advantage of the strengths of either approach depending on the needs of any given application.

DC distribution at very high voltages over long distances, or within localized areas or throughout buildings at voltages such as 380V or 24V, is likely to become more prevalent as power-generation policies move to integrate more renewable energy sources and micro-generation, and as end-user demands continue to shift towards low-voltage electronic equipment. This may encourage widespread adoption of DC microgrids, which could, in turn, drive significant redesign of all sorts of equipment, from home electronics and PCs to battery chargers, power adapters, wall outlets, and light switches.

Basics of Digital Down-Conversion in DSP

This article discusses digital down-conversion, a digital-signal-processing technique widely used in digital radio receivers.

Digital down-conversion is a digital-signal-processing technique that is widely used in digital radio receivers. This article will review the basics of a digital down-converter (DDC). We’ll first look at the advantages of using a DDC rather than its analog counterpart. Then, we’ll discuss an example and explore the basic operation of a DDC.

To understand the advantages of using a DDC, let’s first review a traditional dual-down-conversion receiver and examine its drawbacks. The basic dual-down-conversion receiver is shown in Figure 1. As you can see, there are several analog blocks before the signal is digitized by the analog-to-digital converters (ADC).


Figure 1. The basic dual-down-conversion receiver.

The following section reviews the basic functionality of each of the blocks used in the above receiver. If you’re familiar with the basics of RF engineering, you can go through the next section to refresh your knowledge; otherwise, you may want to begin by reading some pages from AAC’s RF textbook.

The Basic Dual-Down-Conversion Receiver

In the receiver of Figure 1, the first bandpass filter, BPF1, performs image rejection for the first mixer, labeled “RF Mixer” in the figure. It also partially suppresses the interferers picked up by the antenna. This relaxes the linearity requirements of the low-noise amplifier (LNA).

The output of the bandpass filter is amplified by the LNA. This amplification makes the noise that will be contributed by the following stages relatively small in comparison to the desired signal. In this way, the receiver becomes less sensitive to the noise of the stages after the LNA.
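
Friis's cascade formula quantifies this effect. Referred to the receiver input,

F_total = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 × G2) + ...

so each stage's excess noise is divided by the total gain preceding it. With, say, 20 dB of LNA gain (G1 = 100, an illustrative value), the noise contribution of the RF mixer and later stages is suppressed by a factor of 100 when referred to the antenna.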

Then, the amplified signal at node B is down-converted to the intermediate frequency, f_IF, by the RF mixer.

Now that the desired signal has been down-converted to a lower frequency, we can more easily build a relatively high-Q filter, BPF2, and partially perform the channel selection. Note that, thanks to the dual-down-conversion structure of the receiver, the intermediate frequency of the first mixer, f_IF, can be relatively high. This relaxes the requirements of BPF1.
Next, the signal goes through a quadrature mixer driven by Oscillator 2 (see Figure 1). The frequency of Oscillator 2 is equal to f_IF, so that the center frequency of the desired band will be translated to DC. This means that we won't need an image rejection filter for the IF mixers.

Next we perform channel selection by means of the baseband low-pass filters (LPFs), and finally, the ADCs will digitize the desired signal and the result will be further processed by the digital signal processor (DSP). The DSP engine will perform operations such as equalization, demodulation, and channel decoding.

Drawbacks to the Traditional Radio Receiver and the Solution

We can consider three main limitations of the dual-down-conversion receiver shown in Figure 1:
  1. The two baseband paths must be highly matched. The IF mixer, LPF, and ADC in the blue path must be matched with the corresponding components in the green path.
  2. The analog filters introduce phase distortion.
  3. The ADCs inject a DC term that cannot be easily removed from the desired information. Note that the IF mixers of Figure 1 translate the center frequency of the desired channel to DC, where the ADC can inject an error term. This ADC offset can be produced by the offset of its building blocks such as amplifiers and comparators. The offset term leads to a non-zero digital code even when a zero signal is applied to the ADC. This can be very important in systems that convey information at very low frequencies.
We could remedy these imperfections in the DSP portion of the receiver; however, a better solution is to put the A/D converter before the quadrature mixers in the receiver chain. This is shown in Figure 2.


Figure 2. Moving the A/D conversion to the IF, before the quadrature mixers.

As you can see, now the A/D conversion takes place at the IF rather than at baseband. This means that the ADC will have to operate at a higher sample rate. As shown in the figure, the blocks after the ADC are all operating in the digital domain. For example, the outputs of Oscillator 2 in Figure 2 are actually the digital values corresponding to the sine and cosine signals. To implement Oscillator 2, we generally use a direct digital synthesizer (DDS). The second down-conversion is performed using two digital multipliers, and the LPFs are digital filters.

As mentioned above, with the structure of Figure 2, the ADC will have to operate at a higher sample rate. This could be considered a disadvantage, but the DDC approach also offers significant benefits:
  1. Now, the IF mixers and the LPFs are digital circuits. Hence, imbalance-related distortions, which arise from the mismatch between analog components, have been eliminated.
  2. Unlike in the analog domain, we can easily design linear-phase digital filters.
  3. The DC term injected by the ADC can easily be removed by a digital filter before the signal goes through the IF mixers (see chapter 12 of Digital Front-End in Wireless Communications and Broadcasting for an example).

Note that, while Figure 2 has the quadrature mixers and LPFs outside of the receiver’s DSP engine, we could certainly implement these blocks within the system’s DSP platform. Also, after the baseband LPF, we can reduce the sample rate significantly without losing the desired information (see my article on multirate DSP and its application in A/D conversion for more information). Thus, we can redraw the circuitry inside the dashed box of Figure 2 as shown in Figure 3. This block is called a digital down-converter, or DDC.


Figure 3

Digital Down-Conversion

Let’s assume that after analog-to-digital conversion the spectrum of the desired signal is as shown in Figure 4.


Figure 4

The desired signal is centered at 110 MHz, and it has a bandwidth of 4 MHz (the diagram shows both the positive and the negative frequencies). Also, we assume that the ADC is producing samples at a rate of 440 MSPS (mega samples per second). How will the DDC process this input?
The DDS employed by the DDC will generate 110 MHz sine and cosine signals. Each of these sine and cosine functions will lead to impulses at ±110 MHz. Since multiplication in the time domain corresponds to convolution in the frequency domain, we will get the spectrum shown in Figure 5 for nodes A and B in Figure 3.


Figure 5

As you can see, a frequency shift of ±110 MHz has translated the blue spectrum of Figure 4 to both 220 MHz and DC. Similarly, the green spectrum is shifted to both DC and -220 MHz. We are able to use one plot for nodes A and B because these two nodes have the same amplitude characteristics, and Figure 5 conveys only the amplitude spectra. The phase spectrum of node A will be different from the phase spectrum of node B.

In Figure 5, note that the signal sidebands overlap around DC after downconversion. Considering this overlap, can we recover the desired information using only the part of the spectrum that is centered around DC? Yes we can; we are using quadrature mixing, which generates two identical amplitude spectra but also two non-identical phase spectra, and the phase spectra of the overlapping region allow us to recover the original information. Since this overlap is not a problem, the frequency components above 2 MHz don’t provide any necessary information, and consequently we can put an LPF after the digital mixer to keep only the frequency components below 2 MHz. This low-pass filtering, depicted as a single-stage filter in Figure 3, is generally implemented as a two-stage filter, as shown in Figure 6.


Figure 6

The first stage, LPF1, can be designed to eliminate the high-frequency components centered at 220 MHz. To this end, we need an LPF with a passband that extends to about 2 MHz and a stopband that begins at about 218 MHz. This filtering operation is sometimes referred to as filtering the image signal created by the DDS.

The second stage, LPF2, eliminates any unwanted frequency components between 2 MHz and 218 MHz. After LPF2, the signal contains no frequency components beyond the intended information bandwidth (i.e., 2 MHz), but we are still using 440 MSPS to represent this signal. Hence, we can apply the downsampling concept to reduce the sample rate.

A more efficient implementation would be to break LPF2 into a cascade of stages and perform part of the overall downsampling after each of these stages. Again, for more details about FPGA implementation of a DDC, please read Chapter 12 of the book I mentioned above.
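
As a concrete illustration, here is a minimal NumPy/SciPy sketch of the DDC of Figure 3 using this article's example numbers (440 MSPS input, desired signal centered at 110 MHz). The AM test signal and the specific two-stage decimation factors are illustrative choices, not taken from the article:

# Minimal DDC sketch: quadrature mix to DC, then filter and decimate.
import numpy as np
from scipy import signal

fs = 440e6            # ADC sample rate (samples/second)
f_if = 110e6          # center frequency of the desired channel
n = np.arange(2**16)

# Stand-in for the ADC output: a 110 MHz carrier with 1 MHz amplitude modulation
x = (1 + 0.5 * np.cos(2*np.pi*1e6*n/fs)) * np.cos(2*np.pi*f_if*n/fs)

# Quadrature mixing with the DDS outputs shifts the spectrum by +/-110 MHz,
# producing components at DC and 220 MHz (nodes A and B in Figure 3)
i_mix = x * np.cos(2*np.pi*f_if*n/fs)
q_mix = -x * np.sin(2*np.pi*f_if*n/fs)

# Low-pass filter and downsample in two stages (11, then 10), taking the
# rate from 440 MSPS to 4 MSPS; decimate() applies an anti-alias LPF before
# each rate reduction, mirroring the cascaded-LPF approach described above.
def to_baseband(sig):
    for factor in (11, 10):
        sig = signal.decimate(sig, factor)
    return sig

baseband = to_baseband(i_mix) + 1j * to_baseband(q_mix)  # complex baseband, 4 MSPS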

Conclusion

In this article, we examined the benefits of using a DDC. We saw that a DDC can improve the performance of the basic dual-down-conversion receiver: It can eliminate imbalance-related distortion created by an analog IF mixer and it avoids phase distortion from analog filters. After the DDC, the sample rate is significantly reduced and we can have a more efficient implementation of the DSP routines that further process the data. 


To see a complete list of my articles, please visit this page.

How Near Field Communication is Changing Our Mobile Life

Near field communication (NFC) has made its way into many applications, including security, payment methods, and even access to areas and items. There's no doubt it's had an impact on how we develop and use hardware—but has it been a boon or a risk?

Near field communication (NFC) is a protocol that was first defined in 2003 for use in radio-frequency identification (RFID) technology, in standards distributed, promoted, and certified by the NFC Forum.

The standard describes how two devices placed in close proximity (within about 1.6 inches) can exchange data with one another via magnetic induction. Once a connection is established, data can be exchanged at a rate of up to 424 kbps on the 13.56 MHz frequency in one of three modes: peer-to-peer (two-way exchange of information), read/write-only (one-way exchange of information), or card emulation mode (how NFC payment operates).

The first NFC-enabled phone was the Nokia 6131 flip phone, released in 2006. An NFC-enabled Android phone wasn't released until 2010 with the Nexus S, and Apple didn't include NFC in the iPhone until 2014. Over time, however, especially as NFC payments become more ubiquitous among typical smartphone users, increasingly creative uses of NFC are being adopted. Here's a look at some of the ways NFC is already being used on your smartphone.

NFC Contactless Payment

Sometime in the early 2010s, banks and credit card companies began to issue cards that included RFID chips for contactless payments. The motivation for them was simple—the easier it was to make a payment for something, the more a consumer was likely to spend. While transactions became suddenly much more convenient, concerns over the security became much more obvious.

The problem with initial RFID technology in payment methods was that, if you lost or misplaced your debit card, a third party could easily use it without knowing your PIN. To mitigate this risk, many financial institutions imposed spending limits on contactless payments. There was also a period of time where RFID skimming was possible, although today RFID-enabled cards are encrypted to prevent this sort of theft.

The first smartphone-enabled NFC payment application came from Google through Google Wallet (now Google Pay after merging with Android Pay) on the Nexus S. Apple released Apple Pay in 2014, making it possible to make payments using an iPhone (6 or later) and the Apple Watch.

Image courtesy of MobileAppCost.

It may seem counterintuitive to keep all of your contactless payment card information on your smartphone. But consider that this method may still be more secure than carrying those cards in your wallet. After all, most phones require a PIN, password, or biometric ID to access the smartphone and make the payment (a process that is still more convenient than entering your bank PIN on a point-of-sale terminal). If your phone is lost, there is still that added layer of security, as well as the possibility of remotely wiping your device if the feature is enabled.

For many, there's a general unease in making your smartphone an even more critical point of failure if lost. Beyond that, NFC payments include privacy concerns about whether various companies are tracking your spending habits through apps.

Access Control

Apple has maintained a high degree of control over the NFC hardware in its devices, initially limiting its use to Apple Pay only. Slowly, the company has been opening the module up to app developers, and it currently provides access through the Core NFC framework. This kind of limited access to new hardware is not unusual for Apple; another example is the limited access to the fingerprint reader after it was first released as a feature. Other limits include only enabling NFC from an app while the app is open in the foreground.

Recently, Apple announced plans to expand use of the NFC module to everything from accessing hotel rooms and using the iPhone as a transit pass to opening car doors. It is also reported that employees on the Apple campus use their devices for access control in the building. Further, Apple reports that it plans to expand capabilities for developers in Core NFC.

Some projects already exist that allow users to unlock their cars with their smartphones using NFC. Once again, one might wonder about the security of putting this capability on a phone that can be misplaced or lost, although that is not unlike misplacing or losing your car keys. If anything, with lost keys it's usually easier for a finder to identify the car they belong to, since keys usually feature the car manufacturer's logo.


Hardware-Based 2FA

Two-factor authentication (2FA) can add an important layer of protection when accessing accounts, devices, or other secure information: instead of relying on a password alone, a second authentication response is required.

2FA is implemented in many ways today: a code sent by text message or email, a key dongle with regularly refreshing access codes, or an authenticator app on your phone (the sketch below shows how those rotating codes are derived). These methods are just inconvenient enough that many people still don't bother. With NFC-enabled smartphones, however, 2FA can be streamlined and become far more convenient.
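Those regularly refreshing codes are typically time-based one-time passwords (TOTP, RFC 6238), and the derivation is simple enough to show in a few lines. A minimal sketch using only the Python standard library (the shared secret is a made-up example):

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
# The shared secret below is a made-up example.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step            # 30-second time window
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code
```

Hardware keys sidestep typed codes entirely in favor of a cryptographic challenge-response, which an NFC tap can complete in one motion.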

That is the objective of the YubiKey, which takes advantage of Apple's expanded Core NFC access to enable tap-and-go 2FA with a hardware key. The YubiKey has already established itself as a hardware 2FA key for laptops: inserted into a USB port, it can authenticate computer logins, email accounts like Gmail, and cloud storage like Dropbox.

Image courtesy of Yubico.

On the iPhone, the same keychain dongle can be used to authenticate access to the LastPass password manager, and with Apple's most recent announcement there is anticipation that the YubiKey will eventually handle 2FA for many other apps. This application of NFC can make it even harder for an adversary to access sensitive data on your smartphone: even if they somehow bypass your password, PIN, or biometric ID, the hardware key would still be required. NFC also makes this form of 2FA more convenient, and therefore more likely to be used.




Have you used NFC in a design? What sort of applications would you want to see NFC used for?

The Drone Takedown: Battelle’s DroneDefender

Battelle introduces a new aid in the defense against rogue drones.

Drones have quickly become an invention that seemed good in theory but is proving annoying in practice. While most people are happy flying their GoPros around abandoned buildings and Scientology compounds, it only takes one bad guy who wants to drop drugs by drone into a jail to ruin things for the rest of us. For every company that wants to deliver cheap hand soap to your doorstep, there's an organization calculating how many lives it can destroy in coordinated drone attacks.

The problem is that drones entered the market long before any regulations existed. Though the US is set to have new rules in place by the end of this year, states have been left to their own devices, which means uneven and vague laws across the country. It's still perfectly legal to take aerial shots almost anywhere, which gives paparazzi free rein to stalk celebrities from the comfort of the sky. Even Kanye West is concerned.

And yet, with the relative freedom drones still enjoy, there's really no good way to stop them. Though using them for skeet-shooting practice is tempting, it's still illegal to shoot down a drone, even one flying over your own house.

Boeing's solution to the entire debacle was to release a drone-killing laser cannon. It works, but it isn't available to the general public and is insanely expensive. A more viable alternative has arrived from a company called Battelle, which has developed the DroneDefender. The concept is simple: interrupt the drone's GPS and remote-control signals to bring it to the ground. See the video below for the device in action.


The DroneDefender is portable and cost-effective. It hasn't yet received FCC authorization, and there's no word on whether it will be available to the general public once it does, but it's a step in the right direction.

It's also a pretty simple idea: signals are relatively easy to disrupt; the difficulty is disrupting them at a distance. And that's another downside to the DroneDefender: it only works at ranges up to 400 m. That means if the military wants to stop a drone attack, it would first have to locate the drone, get within 400 meters of it, and then attempt to bring it down. But solutions need to happen, and happen fast: ISIS already has drones, and as it absorbs more terrorist cells, the number and capability of its drones will only increase.


The bottom line is that the DroneDefender is a useful aid in the defense against drones, but we're still a long way from protecting entire countries from malicious drone attacks, and even from protecting our own homes.

Welcome to the World of Wireless Payments

Work hard for your money? Want to keep it? Read on.

Identity theft and its accompanying crimes are so ubiquitous that the four major credit card companies (MasterCard, Visa, Discover, and American Express) set an October 1, 2015, deadline for credit card systems to switch to more secure "smart" chip cards. But while merchants and credit card companies scramble to upgrade their cards and terminals, other companies are turning to wireless payments to make everyday transactions just about hacker-proof.

Here's a rundown of the wireless payment contenders and the technology behind them:

Apple Pay



Apple Pay uses NFC (Near Field Communication) to transmit a one-time authorization code to the payment terminal. The authorization code is released only when a fingerprint is successfully scanned (or, if fingerprint ID is not working or not enabled, a passcode will suffice). The iPhone's internal security chip works with the POS terminal to generate a cryptogram and attach it to the customer's account number, which is then sent to the bank to process the transaction. So even though the entire process takes just seconds and the money shows as deducted from your bank account, there is still processing happening on the bank's back end. If a hacker manages to intercept the authorization code sent to the payment terminal, congratulations: the code is already useless.
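Apple's actual scheme is proprietary EMV tokenization, but the gist of why an intercepted code is worthless can be sketched conceptually: a per-device secret signs a per-transaction counter, so every cryptogram is single-use. Everything below is illustrative, not Apple's implementation:

```python
# Conceptual sketch of one-time payment cryptograms (NOT Apple's actual
# scheme, which is proprietary EMV tokenization). The idea: a per-device
# key signs a transaction counter, so each code is valid exactly once.
import hmac, hashlib, secrets

device_key = secrets.token_bytes(32)   # provisioned into the secure element
transaction_counter = 0

def next_cryptogram(amount_cents: int) -> bytes:
    global transaction_counter
    transaction_counter += 1
    msg = f"{transaction_counter}:{amount_cents}".encode()
    return hmac.new(device_key, msg, hashlib.sha256).digest()

# Each call yields a fresh code; a replayed one fails the bank's counter check.
print(next_cryptogram(499).hex())
print(next_cryptogram(499).hex())  # different, even for the same amount
```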
However, Apple Pay has its limitations: it only works on the iPhone 6 and newer (or the iPhone 5s paired with an Apple Watch). It also supports only a handful of credit and debit institutions (though that number is steadily increasing) and is currently available only in the US and UK. Still, it's much safer than traditional credit and debit cards. Plus, it's admittedly pretty fun to use.

Google Wallet



Google Wallet has been around since 2011, making it the granddaddy of wireless payments. It handles payments differently, through a Google Wallet Virtual Card that isn't the same as your bank card. The user first downloads the Google Wallet app and enters a PIN. The Virtual Card then communicates with your preferred card and sends that information to the merchant and on to the payment processor. That adds an extra step to the transaction, even if it does mitigate fraud, and it means your actual card information still sits on Google's servers in encrypted form; not ideal, considering that those servers can still be hacked. Google's FAQ says, "Your actual credit card number is not stored. Only the virtual prepaid card is stored and Android's native access policies prevent malicious applications from obtaining the data. In the unlikely event that the data is compromised, Wallet also uses dynamically rotating credentials that change with each transaction and are usable for a single payment only. Finally, all transactions are monitored in real-time with Google’s risk and fraud detection systems."

Remember, too, that Google makes its money by being a big data gatherer and analyzer: Google Wallet is free to the customer but costs Google in server processing and real-time monitoring. It's probably making up for those costs by analyzing the shopping patterns of Google Wallet customers.

Recently, though, Google has shied away from promoting Google Wallet as its wireless payment method of choice and has been using it for mostly peer-to-peer payments, similar to PayPal. Instead, Google is now promoting...

Android Pay



Android Pay relies on NFC communication and essentially replaces Google Wallet. It works nearly identically to Apple Pay's tokenization approach: no sensitive information is ever shared with the merchant, and the token is only authorized with a fingerprint or a passcode. The Android Pay app needs to be downloaded and set up once, and then pops up automatically when it senses an NFC signal. However, while Apple's Secure Element is integrated into the hardware of its devices, Android Pay relies on the cloud to generate its tokens. It can store a limited number of tokens on the phone, but if you plan on making multiple payments in an area with poor signal, you may be out of luck.

The downside, of course, is that it only works for Android users. But an NFC signal is an NFC signal: if you see an Apple Pay sign, chances are the terminal will also accept Android Pay.

Samsung Pay


The newest to join the wireless payment fray, Samsung Pay works on Samsung's Galaxy S6 Edge+, Galaxy Note 5, Galaxy S6, and S6 Edge, and is available in South Korea and the US. Unlike Google Wallet and Apple Pay, Samsung Pay works in almost every store, even those still using magnetic swipe-to-pay POS terminals, and merchants don't need to sign up for new programs or upgrade their hardware. The payment system uses similar technology to Apple Pay (NFC plus fingerprint or passcode authorization). The technology that lets it work with magnetic-stripe readers is unusual: it's called Magnetic Secure Transmission (MST) and was developed by a company called LoopPay, which Samsung acquired earlier this year.

While anyone can buy a separate LoopPay device that slides onto the back of an existing smartphone, Samsung has integrated the technology into its Samsung Pay phones. A tiny metal coil bent into a loop creates a magnetic field that communicates with magnetic credit card readers the same way swiping a card does. What's interesting is that MST technology is actually older than NFC, which means merchants don't have to upgrade their POS systems. Samsung says Samsung Pay works with 90% of current US payment terminals; a hefty number, and certainly higher than Google Wallet or Apple Pay can claim.

The downside? It won't work with your iPhone or with most other phones, for that matter. But Samsung is so confident in its new technology that it's willing to pay you to use it.

The Bottom Line


For security and compatibility, Samsung Pay wins this round, though Apple Pay is close behind if more merchants adopt NFC technology. We've come a long way from the Google Wallet app, and light-years beyond plastic cards. Wireless tokenization and fingerprint authorization may be the keys to permanently stopping thieves from stealing credit card numbers... though they'll likely just move their business to online fraud. Still, there's hope for protecting everyday transactions from those who would gladly steal from the unsuspecting card user.

New Solar Panel Design Tackles Solar Energy’s Achilles Heel by Harvesting Energy from Rain

Researchers from Soochow University have developed a prototype solar panel that can harvest energy from rain. Could this be a new direction for solar energy or is it merely a neat science project?

For the past two decades, green energy sources have been heavily developed and invested in. Not long ago there were many contenders in the energy market, including solar, wind, geothermal, hydroelectric, and even fusion. It seems, however, that fusion is always 20 years away, and wind, geothermal, and hydroelectric systems each have their own hurdles and limitations that make them challenging to implement.

Solar, for its part, has largely proven to be a reliable electrical source for many places around the world, but even solar has drawbacks. Firstly, solar energy depends on sunlight, so it can only generate electricity during the day (excepting bright moonlit nights, when some solar panels can produce as much as 0.02% of their capacity). Secondly, solar output also drops during overcast and rainy days, which makes solar most efficient in hot, dry places with little rain or cloud cover. Since a good portion of the northern hemisphere has a wet, overcast climate, solar panels in places such as the UK and US face serious energy penalties.
If only engineers and scientists could find a way to harness more energy during rain…

Harnessing the Energy of Rain with Nanogenerators

Scientists from Soochow University in China have created a solar panel that combines triboelectric nanogenerators with the top layer of the solar cell. A “triboelectric nanogenerator” is a nanoscale device that converts mechanical energy into electrical energy; unlike piezoelectric devices, triboelectric nanogenerators exploit the static charge produced when dissimilar materials rub together.

Older designs of solar panels with integrated nanogenerators placed the nanogenerator layer on top of the panel, but this reduced transparency and therefore solar-harvesting efficiency. The Soochow team instead created a nanogenerator layer that doubles as the top layer of the solar panel. The resulting panel can generate electricity both when it is sunny and when raindrops strike the surface.

Solar panels are one of the most popular renewable energy sources. Image courtesy of Mike Buckawicki

But Soochow University is not the only group pursuing an all-weather solar panel; a second team, from Yunnan Normal University and Ocean University of China, has created a panel based on graphene. When a raindrop sits on the graphene layer, its dissolved salts dissociate into positive ions, which gather at the water's surface, while free electrons accumulate in the graphene below. These two charged layers interact to create a capacitive effect that stores electrical energy in the form of a potential difference. This design, however, is still in its early days and has yet to advance beyond proof of concept before it can be considered a viable source of electricity.

How Much Energy Can You Get from Rain? A Thought Exercise

So, Soochow University has created a panel that can generate electricity from rain, but how much? Interestingly, the team announced a solar panel efficiency of 13%, which falls within the standard 10-15% efficiency of commercial panels. This means the nanogenerators are clearly not impeding typical solar energy harvesting. But how much energy is actively generated from the rain? Currently there is no published answer, but with a little mathematics and a good amount of speculation we can estimate what energies to expect and whether they are viable at all.

To calculate how much energy we could get from raindrops in the form of mechanical energy, let's try to calculate the kinetic energy of each raindrop and the average number of raindrops falling per square meter per unit time. I'll use the UK—my home and a famously rainy area—as an example area.

Raindrops are rather small: this 2004 paper from the American Meteorological Society says that raindrops do not typically exceed an average diameter of 2.5 mm. Since small raindrops are almost perfectly spherical, we can calculate the volume and therefore the mass of a raindrop. A diameter of 2.5 mm gives a volume of 8.18 mm³, which holds 0.00818 ml of water, so the mass of the drop is approximately 0.00818 grams, or 8.18e-6 kg. The terminal velocity of a typical raindrop is about 10 meters per second, so its kinetic energy E = ½mv² is approximately 0.5 × 8.18e-6 kg × (10 m/s)² ≈ 4.09e-4 J.


Each raindrop contains some amount of mechanical energy.

The UK's average rainfall is 885 mm per year and, since this measurement is independent of area, we can estimate how much rain falls on a single square meter. A 1 × 1 m area receiving 885 mm of annual rainfall collects 0.885 m³ of water, which (at 8.18 mm³ per drop) corresponds to roughly 108 million raindrops with a combined kinetic energy of about 44,000 J. The UK has around 133 rainy days per year, or 11,491,200 seconds; spreading the total energy over that time gives an average power output of roughly 0.004 W/m².
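Those back-of-the-envelope figures are easy to check; here is the same model in a short Python script, with the article's assumptions baked in:

```python
# Back-of-the-envelope check of the raindrop-energy estimate above.
import math

d = 2.5e-3                      # raindrop diameter (m)
v = 10.0                        # terminal velocity (m/s)
rho = 1000.0                    # density of water (kg/m^3)

volume = math.pi / 6 * d**3     # sphere volume ~ 8.18e-9 m^3 (8.18 mm^3)
mass = rho * volume             # ~ 8.18e-6 kg
e_drop = 0.5 * mass * v**2      # ~ 4.09e-4 J per drop

annual_rain = 0.885             # UK average rainfall (m) over 1 m^2 -> m^3
n_drops = annual_rain / volume  # ~ 1.1e8 drops per m^2 per year
e_total = n_drops * e_drop      # ~ 4.4e4 J

rainy_seconds = 133 * 24 * 3600 # 133 rainy days, in seconds
print(f"{e_drop:.2e} J/drop, {e_total:.0f} J/year, "
      f"{e_total / rainy_seconds * 1000:.2f} mW/m^2")
# -> roughly 4 mW per square metre, matching the estimate above
```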

This figure is almost negligible, which might tempt one to hope that the triboelectric interaction extracts more energy than the drop's kinetic energy alone. However, energy is conserved: the electrical energy from rubbing two materials together cannot exceed the work done against friction between them, and that work is itself bounded by the drop's kinetic energy. The mathematics here rely on assumptions and approximations, but for rain harvesting to be economical the figure above would need to be several orders of magnitude greater; a tall order. Rain would probably yield more energy if it were collected in a storage tank and released through a turbine, in the manner of hydroelectric power.

Again, this is a thought experiment, and any actual figures for these panels' energy-harvesting capabilities will need to come from the researchers themselves. If any stray meteorologists have input on my maths here, please do let me know in the comments below.

So where does this leave “all-weather” solar energy?

In order to see how these new panels could help tap more energy from the environment, we need data on the nanogenerators and how much electricity they actually generate. There are several ways this nanogenerator approach could contribute to the renewable energy industry, whether by improving the efficiency of solar panels or by being incorporated into other energy harvesting facilities.

The concept of rain-energy harvesting has sparked imaginations and innovations for years, oftentimes using piezoelectric polymers to capture the mechanical energy of raindrops. This technique has been demonstrated by researchers at CEA/Leti-Minatec in France and even an enterprising 14-year-old in 2014 for a Google Science Fair project.


This most recent research is notable in that it combines solar harvesting with rain-generated energy using triboelectric nanogenerators. We're unlikely to see nanogenerators in rooftop solar panels any time soon, but this work provides important context for future innovations.

Are Fresh Water Boundaries the Future of Energy Harvesting?

Researchers have developed a nano-sized membrane that can capture energy from osmosis. Is this the dawn of "blue energy"?

Capturing Energy in Estuaries

There are quite a number of renewable energy sources today. The "green" movement has seen the rise of energy harvesting from sources like solar, geothermal, and hydropower.

However, some days the sun doesn't shine and the wind doesn't blow, which limits how much power some of these sources can produce. So what harm could one more source of clean energy do?
Researchers at the École Polytechnique Fédérale de Lausanne (EPFL) might have found a way around the limitations of some renewable energy sources.
Their new energy source? The boundary between seawater and fresh water.


An example of a seawater and fresh water boundary in Alaska. Image courtesy of Jane and Phillip Boger

On July 13, 2016, the research team's findings were published in the journal Nature. The team developed a unique way to produce a large amount of energy at any fresh-seawater boundary.
The energy is produced through osmosis, which occurs when water with a high salt content meets fresh water across a permeable membrane. Salt ions, which carry an electrical charge, pass through the membrane until the salinity of the two bodies of water reaches equilibrium.

So what does all of this have to do with renewable energy? If one can create a layer of material and place it at the boundary between the two bodies of water, the electrical energy carried by the passing salt ions could potentially be harnessed for use.

This is exactly what the researchers at EPFL's Laboratory of Nanoscale Biology have done. They created a membrane just three atoms thick; yes, only three. This semipermeable membrane, made of MoS2 (molybdenum disulfide), separates the two fluids with different salt concentrations.

Designing a Nano-Scale Membrane

Knowledge about the process of ions transferring between fresh water and seawater isn't new; rather, it is EPFL's membrane that is cutting-edge. Because the membrane is so thin, its electrical resistance is low, so a greater current can be drawn.

Rendering of the membrane and water molecules. Image courtesy of Steven Duensing, National Center for Supercomputing Applications.

Molybdenum disulfide is rather cheap as a material, which is a huge plus if the membrane were to be scaled across hundreds of miles. The membrane contains a vast number of nanometer-scale pores that allow salt ions to pass through, and the ions' charge is harnessed to generate electricity. Essentially, the ions pass through the nanopores and their charge is collected by electrodes.

Fortunately, the researchers' choice of material allows positively charged ions to pass through while repelling most negatively charged ions. As positive and negative charges build up on opposite sides, a potential difference develops between the two liquids; it is this voltage that drives the generated current.
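For a sense of the voltages involved: the potential an ideally cation-selective membrane can develop is given by the Nernst equation, E = (RT/zF)·ln(c1/c2). A rough sketch using typical textbook concentrations, not figures from the EPFL paper:

```python
# Rough Nernst-potential estimate for a cation-selective membrane
# separating seawater from fresh water. Concentrations are typical
# textbook values, not figures from the EPFL paper.
import math

R = 8.314       # gas constant (J/(mol*K))
T = 298.0       # temperature (K)
F = 96485.0     # Faraday constant (C/mol)
z = 1           # charge of Na+ ions

c_sea = 0.6     # ~0.6 M NaCl in seawater
c_fresh = 0.01  # ~0.01 M in river water

E = (R * T) / (z * F) * math.log(c_sea / c_fresh)
print(f"Nernst potential: {E * 1000:.0f} mV")  # on the order of 100 mV
```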
Jiandong Feng, the lead author of the research, has said that one of the challenges his team faced was determining the right size for the openings in the membrane, called nanopores, through which the ions pass.

According to Feng, making the nanopores too large would allow negative ions to pass through, wasting the charge they carry and lowering the membrane's voltage. On the other hand, nanopores that are too small let fewer ions through, leaving the current too weak.

Potential for Renewable Energy

The use of such a small membrane could lead to an astonishing amount of energy harnessed from a simple physical process.

According to the researchers' analysis and calculations, a 1 m² MoS2 membrane with only 30% of its surface covered by nanopores could produce 1 MW of electricity; enough to power 50,000 energy-saving light bulbs!
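A quick sanity check of that headline figure (the per-bulb wattage is inferred from the researchers' own comparison):

```python
# Sanity check of the headline figure above.
membrane_power = 1e6           # claimed output of a 1 m^2 membrane (W)
bulbs = 50_000
print(membrane_power / bulbs)  # -> 20.0 W per energy-saving bulb

# For scale: 1 MW from one square metre is an enormous power density;
# a typical solar panel delivers on the order of 200 W/m^2 in full sun.
```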


The molybdenum disulfide compound occurs naturally and can also be created by a process known as chemical vapor deposition. With the material so readily available, the membrane could be scaled up for larger amounts of energy production. However, scaling remains a challenge because creating uniform pores throughout the membrane is proving difficult.

Nevertheless, this is a problem that can eventually be solved. The research team ran a nanotransistor from the current generated through a single nanopore, demonstrating a self-powered nanosystem.

Nano-Membranes in Creating Potable Water

EPFL is not the only institution researching this topic. The lab has been working in tandem with a group of researchers led by Professor Narayana Aluru at the University of Illinois at Urbana-Champaign. The US-based team has aided the EPFL team with molecular dynamics simulations to help predict molecular behavior and better design the membranes and nanopores.

Aluru's interest in working with molybdenum disulfide membranes, however, does not focus on the potential for harvestable energy. Instead, his team has been looking at these membranes as a way to convert salt water into potable fresh water.


A rendering of the molybdenum disulfide membrane (with nanopore) being used as a filter to create fresh water. Image courtesy of the University of Illinois.

In short, this membrane technology could leverage the basic process of osmosis to not only produce sustainable energy but also provide drinkable water.

Blue energy is a growing trend which could gain massive traction in the upcoming years as the technology advances.


Once the engineering models and fabrication methods mature, we could see a future where blue energy is widely available. The simple process of osmosis could play a tremendous part in generating renewable energy, and in providing clean water besides.

What is the Relationship Between Fracking, Sinkholes and Earthquakes?

Prior to 2009, most earthquakes in the U.S. occurred in California. But since 2009, towns and cities across the central and eastern United States have seen a dramatic rise in seismic activity, earthquakes, and sinkholes. The U.S. Geological Survey's earthquake hazards program reports that from 1978 through 2008, the central and eastern U.S. experienced 844 earthquakes of magnitude 3 or greater. From 2009 to 2013 that count jumped to 2,897, nearly three and a half times the previous three decades' total in just five years, and it keeps rising. In 2014 alone, more than 659 M3+ earthquakes were recorded. The question that begs answering is why the sudden increase in earthquakes and sinkhole development. Are these earthquakes natural or man-made?
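Annualizing those USGS counts makes the jump even starker:

```python
# Annualized M3+ earthquake rates from the USGS counts quoted above.
rate_before = 844 / 31   # 1978-2008 inclusive: ~27 quakes/year
rate_after = 2897 / 5    # 2009-2013 inclusive: ~579 quakes/year
print(f"{rate_before:.0f}/yr vs {rate_after:.0f}/yr "
      f"({rate_after / rate_before:.0f}x increase)")
```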

The Sinkhole that Swallowed a Town

In August of 2012, after months of mysterious seismic activity and baffling bubbling on the Louisiana bayou, a massive sinkhole opened near the small town of Bayou Corne, 77 miles west of New Orleans. The 1-acre sinkhole began swallowing trees whole and grew to 34 acres over the next four years. State scientists blamed the Texas Brine Company for causing the sinkhole by drilling too close to a salt dome's outer edge, resulting in a $48.1 million settlement with the town's residents.
NASA radar imaging reviewed later shows the Bayou Corne sinkhole forming.

Real or Man-Made Earthquakes?

To analyze the problem, the USGS began setting up temporary seismic monitoring stations across the region, allowing its scientists to pinpoint seismic locations more accurately and determine whether there is a relationship between mining, fracking, and wastewater injection on one hand and human-induced earthquakes on the other. The results were so revealing that in 2016 the USGS released its first-ever induced-earthquake model, incorporating both naturally occurring and man-made earthquake hazards.

Hydraulic Fracking and Wastewater Injection Risks

The USGS downplays the effects of hydraulic fracking itself, indicating instead that most human-induced earthquakes result from injecting wastewater derived from oil and gas production back into the Earth.

In operations where gas or oil is removed through fracking, much of the wastewater is injected back into the same area without causing earthquakes or sinkholes. But where dedicated wastewater wells are drilled to receive the byproducts of these operations, the fluids are injected into areas never drilled before, causing an increase in subterranean pressure that often leads to human-induced earthquakes.

Minimizing Risks of Human-Induced Earthquakes

A study completed in September 2016 by Arizona State University geophysicist Manoochehr Shirzaei claims there are ways to mitigate and reduce human-caused earthquakes. The researchers compared a region near Timpson, Texas (site of a 4.8-magnitude earthquake) with satellite radar images from May 2007 to November 2013 and discovered an uplift in the area caused by the injection of wastewater into the subterranean rock. Computer simulations using the uplifted area showed that the wastewater seeped away from the injection sites, boosting water pressure and eventually flowing into known earthquake fault zones.

The increased pore pressure (the buildup of water in the small spaces surrounding subterranean rock) suggested by the computer model was enough to trigger earthquakes 3.5 to 4.5 kilometers beneath the Earth's surface. The study, published in the journal Science, lets researchers estimate the increase in underground pressure during wastewater injection, allowing mining companies to stop injecting fluids before the pressure reaches a dangerous level.

Hydraulic Fracking, Oil and Gas Production Regulations

The Environmental Protection Agency and state environmental departments serve as the watchdogs for hydraulic fracking, wastewater injection wells, and oil and gas mining operations. These organizations regulate the permitting, construction, operation, and closure of injection wells created during hydraulic fracking and oil and gas production.

In addition to these regulations, the EPA has authority to regulate hydraulic fracking that uses diesel fuels in the process; these rules serve to protect underground natural water resources. One drawback: the EPA does not regulate gas or oil wells used solely for production.

NASA Radar Imaging Predictions

Just prior to the Bayou Corne sinkhole collapse in 2012, a review of NASA radar imaging showed that this region of Louisiana had the potential for a sinkhole to develop. The images, collected by NASA's C-20A jet carrying the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), measure and detect abnormalities in the Earth's surface. When NASA researchers Cathleen Jones and Ron Blom, of the Jet Propulsion Laboratory in Pasadena, reviewed the images, they realized the data had shown the impending collapse of the Bayou Corne sinkhole a month in advance of the event; the area first shifted by as much as 10.2 inches just prior to the collapse. ASU geophysicist Manoochehr Shirzaei used similar data to reach his conclusions about the area surrounding Timpson, Texas.

Protecting People and the Environment

History and facts show that careless mining practices can degrade or destroy an area's water quality, cause earthquakes, or lead to sinkholes. With governmental regulation and continued oversight, advanced radar imagery, and a willingness by mining companies to adhere to the rules, mining operations need not be detrimental to the environment, to people, or to their homes.

China Opens the Eye to Heaven – The World’s Largest Telescope

China took a giant leap into the 21st century when it completed construction of the world's largest single-dish radio telescope in the fall of 2016. An aerial view of the massive bowl-shaped dish aptly fits its given name – Tianyan – the Eye of Heaven. China spent 1.2 billion yuan (about $180 million USD) to build the high-tech listening device, a cost it hopes to partially offset through tourism.

Concept to Construction

First conceived in 1993, the preliminary study project – the Knowledge Innovation Project – cleared its first hurdle in October 2001, when it received support from the Chinese Academy of Sciences and the Ministry of Science and Technology. It took another six years before the project received approval from the National Development and Reform Commission in 2007, when it entered the feasibility study phase. A little more than a year later, the project received the green light and the initial design phase started. Construction began in 2011 and took a little over five and a half years; the telescope is now in operation.

Bigger Than Arecibo

The telescope sits among traditional rural villages that dot the foothills of the Guizhou mountains in Southwest China, where more than 9,000 residents were relocated from a nearly three-mile radius so the equipment could operate without radio interference. The site, in the Dawodang depression, is known for its temperate climate, natural water drainage, and weather-resistant rock, and the surrounding karst landscape creates an ideal location: the mountains shield against radio frequency interference and keep winds down.

Almost twice the size of the Arecibo dish in Puerto Rico, the spherical Tianyan dish has a 500-meter (about 1,640-foot) diameter: a diameter equal to nearly five football fields laid end to end, and an area that could contain roughly 30 soccer fields. The location in the Dawodang depression allows a peak zenith angle of 40 degrees, an opening angle of between 100 and 120 degrees, and a 300-meter illuminated aperture.
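The geometry quoted above is easy to verify; the field dimensions below are nominal assumptions:

```python
# Quick check of the dish geometry quoted above.
import math

d = 500.0                          # dish diameter (m)
area = math.pi * (d / 2) ** 2      # ~196,000 m^2

soccer_field = 105 * 68            # nominal FIFA pitch (m^2)
football_field = 120 * 0.9144      # NFL field incl. end zones (~110 m)

print(f"dish area: {area:,.0f} m^2")
print(f"soccer fields contained: {area / soccer_field:.1f}")        # ~27
print(f"football fields across: {d / football_field:.1f}")          # ~4.6
```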

Special Features

A special feature of the telescope is its active main reflector, which corrects for spherical aberration on the ground, allowing the telescope to achieve full polarization and a wide operational band without complex feed mechanisms. With additional feed systems, the Eye of Heaven could achieve a southern zenith angle of 60 degrees, which would extend sky coverage past the galactic center.

Management and Staffing

Known as the Five-hundred-meter Aperture Spherical Telescope, or FAST, the project employs 71 scientists, technicians, and site professionals. Overseen by the National Astronomical Observatories of the Chinese Academy of Sciences, the telescope has already completed several missions since it went live in September 2016.

An Ear to Heaven

While the telescope resembles an eye, its function mimics a highly sensitive ear: it listens to radio waves from space instead of capturing light the way the Hubble telescope does. It can separate and distinguish the signals it seeks from the background noise generated by stars and pulsars. The radio telescope covers a frequency range of 70 MHz to 3 GHz. The movable feed cabin hangs from cables above the dish and serves as the focal point for the radio waves. Because the dish surface is made up of more than 4,400 individual panels, the telescope can change shape to better focus the radio waves, while a parallel robot and a servomechanism create a secondary adjustable system that allows high-precision tuning.
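For reference, those band limits translate directly into wavelengths via λ = c/f:

```python
# Wavelength range corresponding to FAST's 70 MHz - 3 GHz band.
c = 299_792_458.0  # speed of light (m/s)
for f in (70e6, 3e9):
    print(f"{f / 1e6:>6.0f} MHz -> {c / f:.2f} m")
# -> about 4.3 m down to 0.1 m
```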

Pulsars, Dark Matter and Alien Contact

Scientific goals for the highly sensitive telescope are multi-pronged: search for advanced alien life (entities that might be broadcasting radio waves into space) and map portions of the Milky Way. So far, goals for the FAST telescope include improving on the sharpness of Arecibo's observations by mapping:
  • Pulsars
  • Supernovae
  • Black hole emissions
  • Interstellar gas
Besides further enhancing what the Arecibo telescope has found, China’s scientists plan to start new searches for:
  • Space’s first shining stars
  • Dark matter
  • Extragalactic and new galactic pulsars
  • Radio signals from extraterrestrial life in conjunction with the US-based SETI organization
  • Neutral hydrogen in our own and other galaxies.

Tourism: An Added Benefit

Entrance to the telescope is free, but it costs 50 yuan ($7.20 USD) to catch a shuttle bus to the site and another $7.20 to visit the nearby astronomical museum. The goal is to make China's newest scientific development a scenic landmark; if you plan to visit, schedule accordingly, as only 2,000 people per day are allowed on site to avoid interfering with scientific operations.

Surpassing Scientific Achievements

With the opening of the Eye of Heaven, China has taken massive strides toward surpassing the rest of the world's leading scientific achievements. With a growing, technologically progressive workforce, advances in multiple scientific disciplines, and plans to visit the moon, China now counts more scientific researchers than the United States and is outspending the European Union in scientific research and development.