
The Rundown of ON Semiconductor's Smart-Passive-Sensors (SPS) for the IoT

Sensors and sensor networks are an integral part of the IoT ecosystem. Many markets are already using, or will soon be using, IoT devices, including automotive, transportation, and marine; home automation and security; logistics, inventory, and supply chain; agriculture and livestock; industrial, construction, and power; and medical and personal healthcare.


There are many benefits to using the IoT, such as cost savings, process optimization, yield enhancement, analytics, and data storage.

Smart Passive Sensors (SPS)

ON Semiconductor introduces the world's first wireless sensor—using a standard protocol—that is battery free and microcontroller free.



Features and Capabilities of Smart Passive Sensors

Due to their unique inherent features (battery free, wireless, ultra thin, and low cost to scale), these sensors enable new capabilities, such as dynamically sensing data in:
  • Hard-to-access areas: underground, inside walls, and locations that are toxic or pose health hazards.
  • Space-constrained applications: within doorways, RFID tags, and bandages.
  • Cost-effective multi-sensor deployments: disposable products, multiple data points, and needs that increase over time.

Practical Applications

High-Power Switchgear Equipment

To prevent catastrophic failures inside high-power switchgear boxes, it's essential to identify high-resistance points inside the equipment. Because such points can be found through temperature monitoring, the traditional method is to manually monitor them during scheduled maintenance intervals. This manual process involves extensive labor and yields limited data points—perhaps only one data point every year.

SPS sensors can wirelessly, and continuously, monitor and analyze temperatures on busbars, circuit breaker contacts, and cable connections.

Smart Healthcare

Nurses are experiencing an increase in workload, which, in turn, makes it more challenging for them to effectively monitor patient status. SPS devices help nurses monitor their patients by sending alerts when:
  • A patient is out of bed
  • A patient's temperature changes
  • An IV bag is empty
  • A catheter bag is full
  • A bed liner needs changing
     


These continuously monitoring SPS sensors provide early detection and faster resolution, and, because they are wireless, they do not impede patient comfort.

Server Racks

As data centers become larger and larger, the difficulty and cost of monitoring their equipment increase as well. A full turnkey solution includes the SPS sensors, reader hardware, and software. SPS temperature sensors can monitor—completely wirelessly—the air inlet temperatures inside server racks, helping to optimize cooling efforts, which, in turn, saves energy and reduces costs. Additionally, these wireless sensors can provide a means for early detection of equipment failure and can also help track assets, thereby lowering labor costs.

Digital Farming

Sensors are used on livestock for identifying specific animals and for monitoring temperatures. Animal identification is used to regulate feeding schedules and track milk production factors, while an animal's temperature can be used for early detection of illness or of ovulation. Wireless, battery-free, and maintenance-free SPS sensors offer improved accuracy, combine animal identification and temperature sensing into one device, and can be placed either on an animal's skin or injected beneath it.

Cold Chain/Logistics

SPS sensors can be used to monitor temperatures of food and/or pharmaceuticals during shipment. The continuous temperature monitoring of these goods allows for immediate detection of failures, thus allowing shipments to be rejected before being unloaded and discovered by customers.

Summary

ON Semiconductor’s Smart Passive Sensors (SPS) are the world’s first battery free and microcontroller free wireless sensors. Their unique features make them ideal for applications including hard-to-access and space constrained areas. Full turnkey development solutions are available which include the SPS sensors, reader hardware, and software.
Want to learn more? Access ON Semiconductor's free webcast on-demand: Sensors, Power, and the Internet of Things: How Big Data is Influencing How We Design

Drones on Mars? NASA Projects May Soon Use Drones for Space Exploration

Drone technology has found its way into a variety of applications, including recreational use, photography, security, climate monitoring, and even humanitarian aid. Another domain where we may soon see drones used: space exploration.
 
NASA's JPL (Jet Propulsion Laboratory) recently announced testing of what they've termed a Mars Helicopter Scout (MHS). The scout may be included on the upcoming Mars 2020 mission, a collaborative project led by NASA with a primary mission of determining if life once existed on Mars. The idea is that a helicopter-style drone could help provide better mapping and guidance that will give mission controllers more information to help with path planning and hazard avoidance, as well as identifying points of interest.

How else could we eventually see drone technology used in space exploration? Here's a look at the MHS, other NASA drones, and what kind of challenges engineers face when trying to design a space-ready drone.

Why Send a Drone to Space?

The Mars Helicopter Scout is a payload intended to be part of the Mars 2020 mission. One of its duties, beyond scouting points of interest and potential hazards (say, storms), is to help plan travel routes for the main rover. Although this technology could help advance Mars exploration, this use of the MHS would mainly be a demonstration, since the drone would have severely limited flight capabilities. However, this proof-of-concept use is important because the adoption of helicopters and drones into space exploration could greatly help achieve operational objectives.
Some of the known specs of the MHS:
  • Weight: 2.2 lbs
  • Blade span (co-axial): 3.6 ft
  • Dimensions of chassis: 5.5 in x 5.5 in x 5.5 in
  • Power: 220 W
The MHS is expected to have a range of just under 657 yards and a maximum flight altitude of 130 feet. It will carry a high-resolution, downward-facing camera and will be designed to land on the Martian surface with shock-absorbing feet. The helicopter would get about three minutes of flight time every Sol (a Martian day, equivalent to one Earth day plus about 40 minutes). It would use autonomous control and communicate with the rover directly.
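For a rough sense of the energy budget these figures imply, the quoted 220 W draw over a three-minute flight works out to roughly 11 Wh per Sol. The quick check below ignores charging losses, heater power, and standby loads, which are not specified above:

```python
# Rough per-flight energy implied by the figures above: 220 W for ~3 minutes.
# Charging losses, heater power, and standby draw are ignored here.
power_w = 220           # quoted power draw, watts
flight_time_min = 3     # quoted flight time per Sol, minutes

energy_wh = power_w * flight_time_min / 60
print(f"Energy per flight: ~{energy_wh:.0f} Wh")   # ~11 Wh
```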


 
In 2016, NASA determined that an additional $15 million in funding would be required to keep the progress of the MHS on track. As recently as February 2017, the MHS was potentially on the list for exclusion from the Mars 2020 mission as there were concerns that the project may go over its mass budget.

So far, the project still seems to be in the running for going to Mars: NASA's Mars Institute is reported to be conducting UAV tests in the Canadian Arctic at the Haughton-Mars Project Research Station this fall. The site, Devon Island, is sometimes called "Mars on Earth" and can help determine whether the devices can withstand Martian-esque conditions.

The Horizon of Space Drones

The MHS isn't the only space drone project in the works, however. There is also research going into prospecting drones that may eventually be used for space mining and multi-planetary colonization, as well as for path planning and hazard avoidance.

One such project is the "Extreme Access Flyers," which look much closer to the typical quadcopter-style drones often used here on Earth. The Swamp Works Laboratory has been working on drones ranging from five feet in diameter to ones small enough to fit in your palm.

They hope the drones can eventually be used for everything from imaging to sample collection, though there is particular interest in resource gathering.

The lab has produced multiple prototypes over the years. One of the major differentiators of these vehicles from our earthly drones is the lack of rotors. Each is designed to utilize whatever gas or even water vapor is available to propel itself, depending on whether it's located on Mars or an asteroid.


Space Environment Challenges

There are certainly unique considerations to take into account when designing a space exploration drone. Particularly relevant to space drone development is the fact that the atmosphere on other planets or celestial objects can be much thinner than what's found on Earth—or non-existent. Mars, for example, has 1% of the atmospheric density of Earth. This is important when determining the mass of the drones, since they will not be able to get the lift required to fly if they're too heavy. On the other hand, if they're too light, they might be difficult to control.
Control is another challenge. Such drones and UAVs would need to be fairly autonomous, since real-time control is not possible the way it is on Earth. It takes approximately 20 hours to send 250 megabits of data back to Earth, so live video streams are certainly out of the question.
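To put that figure in perspective, 250 megabits spread over roughly 20 hours amounts to only a few kilobits per second of effective throughput. A quick check, assuming the 20-hour and 250-megabit figures above are representative:

```python
# Effective data rate implied by "250 megabits in ~20 hours".
bits = 250e6            # 250 megabits
seconds = 20 * 3600     # ~20 hours

rate_kbps = bits / seconds / 1e3
print(f"Effective rate: ~{rate_kbps:.1f} kbit/s")   # ~3.5 kbit/s
```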
Finally, there are the daunting challenges of battery capacity and charging. There are a few ways that spacecraft can be powered—the Voyager probes, for example, use RTGs, or radioisotope thermoelectric generators. But for small drones that aren't currently being recruited for deep-space missions, the most practical method is likely a solar-charged battery. Finding the balance between mass, battery capacity, and charging time is another element that will need to be considered.

The MHS is an interesting step for UAVs and drones in space exploration. If it succeeds, we may see more missions using drones.

SiFive Building RISC-V Ecosystem, One Partnership at a Time

The catalog of IPs includes security cores, Logic NVM technology, embedded analytics, and multicore debugging toolsets.
 
SiFive, the originator of the first open-source chip platform, is amassing tools and cores necessary to build custom chips based on the free and open RISC-V instruction set architecture (ISA). A number of companies are making their IPs available through SiFive's DesignShare model that is aimed at significantly reducing the upfront engineering costs needed to develop a custom chip.

The goal—as SiFive's new CEO Naveed Sherwani puts it—is to create a level playing field for anyone who wants to develop a custom chip. Here is how SiFive is streamlining the development of custom silicon by signing up one partnership at a time.

Rambus Offering Security Cores

As part of SiFive's DesignShare model, which facilitates a catalog of IPs at lower costs, Rambus will make available cryptographic cores, hardware root-of-trust, key provisioning, and other security-related components.
That will allow chip developers to easily embed security cores into SiFive's Freedom platforms for developing custom system-on-chips (SoCs). And it's going to be a critical feature for securing the Internet of Things (IoT) end-points and in-field device connections.

eMemory's IP for Logic NVM

SiFive has also added to its catalog of low-cost IPs the logic-based non-volatile memory (Logic NVM) technology from eMemory Technology Inc. It's an embedded memory solution that eMemory licenses to semiconductor foundries, IDMs, and fabless design houses.


Silicon IPs for OTP, MTP and EEPROM memory blocks. Image courtesy of eMemory Technology Inc.

The company claims that its embedded memory IP has been employed in more than 27 billion chips used in consumer, industrial, and automotive applications. Its proprietary silicon IP technologies include NeoBit, NeoFuse, NeoMTP, NeoFlash, and NeoEE.

UltraSoC's Embedded Analytics IP

SiFive's Freedom design platform, which is based on open source RISC-V processor cores, is also amassing a variety of tools and interfaces. Take, for instance, UltraSoC, the supplier of vendor-neutral on-chip debug and analytics tools. UltraSoC is making available its embedded analytics IP for SiFive's DesignShare initiative.

Chip designers can use this debug and trace technology to gain an intimate understanding of the interactions between on-chip processor blocks, custom logic, and system software. Trace is a basic requirement for developers working on any processor architecture; it allows them to view the behavior of their programs in detail and subsequently isolate bugs and identify areas for improvement.

Debugging Toolset from Lauterbach

Lauterbach, a supplier of microprocessor development tools, has also announced support for SiFive's RISC-V cores. Lauterbach’s TRACE32 toolset will provide debug capabilities for SiFive’s E31 and E51 RISC-V Core IP, which is based on the free and open RISC-V instruction set.


The TRACE32 toolset is planning support for debug interfaces like USB. Image courtesy of Lauterbach GmbH.

The toolset provides multicore debugging on individual hardware threads of SiFive cores, right from the reset vector to analyzing startup codes and other key functions. 



What's your experience with RISC-V? Anyone attending the Western Digital workshop this week? Share your experiences below.

Teardown Tuesday: Solar-Powered Floating Water Fountain

In this teardown, we examine the innards of a solar-powered floating water fountain by Feelle.

First Impressions

Feelle's Solar Powered Floating Water Fountain is a small, yet impressive, water fountain intended for a birdbath, pond, or pool, and for garden decoration. The diameter of the solar panel's disc measures about 6.3 inches; and while the thickness of the disc itself is about ½ inch, the overall height of this little fountain is approximately 4 inches (from the bottom of the pump to the top of the fountain).
Other specifications, according to the included product specification sheet, include:
  • Solar panel: 7V/1.4W
  • Brushless (BLDC) pump input voltage: DC 4.5 V to 10 V.
  • Maximum flow rate: 150 L/h.
  • Maximum water height: 30-45cm (11.8-17.7 inches).
  • Startup delay: < 3 seconds
  • Restart: When sunlight is removed, the unit will restart within 3 seconds once sunlight is restored.
Before beginning the teardown process, I felt inclined to test (play with) this little fountain. After placing the fountain in a large bowl filled with water, once the sun found the solar panel, the fountain began to work seemingly perfectly. Unsurprisingly, the height of the water jet depends directly on how squarely the sun's light hits the solar panel, so the fountain works best on a cloudless day when the sun is directly overhead.
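As a sanity check on the quoted specifications, the ideal hydraulic power at the maximum rated flow and head comes out well under the panel's 1.4 W rating. The estimate below ignores pump and power-conversion losses, so the real pump draws more than this:

```python
# Ideal hydraulic power at the rated maximum flow (150 L/h) and head (45 cm),
# ignoring pump and conversion losses.
rho = 1000.0                      # water density, kg/m^3
g = 9.81                          # gravitational acceleration, m/s^2
flow_m3_s = 150 / 1000 / 3600     # 150 L/h expressed in m^3/s
head_m = 0.45                     # 45 cm maximum water height

hydraulic_w = rho * g * flow_m3_s * head_m
print(f"Ideal hydraulic power: ~{hydraulic_w:.2f} W (panel rated 1.4 W)")  # ~0.18 W
```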


Figure 1. Feelle's solar powered water fountain. Image courtesy of Amazon.

No More Testing/Playing Around...Let the Teardown Begin

In reality, there's not too much to this design, at least that I could get my hands on. To ensure long-term reliability and to meet any associated safety standards, all the electronics (sans the solar panel array) are encased in a very hard potting compound/epoxy. This includes the pump and any associated electronics, and also the power conversion electronics (see image below).


Figure 2. All the electronics are encased in a potting compound.

I attempted to scrape away the potting material, but it was like scraping away concrete...not gonna happen! Given the total and complete encasement of the electronics, it doesn't surprise me at all that the IP rating of this device is IP68 (see image below).


Figure 3. The electronics being encased in a potting compound allows the device to be rated to IP68 (waterproof).

The electrical connection between the electronics and the solar panel is made with pins and wires. Figure 4 below shows where these pins connect to the solar panel, while Figure 5 shows how the electrical connections are routed to the electronics. Take note that the electronics box was sealed to the bottom of the solar panel disc using some type of soft silicone—no potting compound was used here—so it was easy to cut through.


Figure 4. The area in the red box is where the solar panel array electrically connects to the electronics.


Figure 5. The electrical connections between the solar panel and the electronics box.

The only internal component that I could physically touch was the water pump's impeller. This inexpensive-looking impeller, along with its attached permanent magnet, is indeed the component that forces the water up and out of the fountain. As can be seen in the image below, this impeller simply slides inside the cylindrical opening. There are no wires or any other electrical components attached to it.


Figure 6. The pump's impeller slides into the BLDC pump housing.

Conclusion

Although there's not too much to "tear down" with this solar-powered water fountain, it is interesting to see how the device is constructed using a BLDC motor along with potting compound to meet the IP68 rating, thus making the device 100% waterproof. For a little showpiece and/or entertainment in a birdbath, pond, or swimming pool, one can hardly go wrong with this fountain and its low price of about $14.
It's interesting to come across an uncrackable device that demonstrates what's required to make electronics waterproof. If anyone's had any luck cracking open potting compound like this, let us know in the comments.


Choosing the Most Suitable MEMS Accelerometer for Your Application

Choosing the most suitable accelerometer for your application can be difficult, as data sheets from various manufacturers can differ significantly, leading to confusion about which specifications are most critical. In part 2 of this article, we will focus on key specifications and features in the context of wearable devices, condition monitoring, and IoT applications.

Wearable Devices

Key criteria: Low power consumption, small size, integrated features to enhance power saving, and usability.

The key specification for accelerometers used in battery-powered, wearable applications is ultra-low power consumption, typically in the µA range, to ensure that battery life is prolonged for as long as possible. Other key criteria are size and integrated features, such as spare ADC channels and a deep FIFO to help with power management and functionality in the end application. For these reasons, MEMS accelerometers are typically used in wearable applications. Table 1 shows some vital signs monitoring (VSM) applications and their corresponding settings by context. Accelerometers used in wearable applications typically classify motion; provide freefall detection; measure the presence or absence of motion to provide system power up, down, or sleep; and help with data fusion for ECG and other VSM measurements. The same accelerometers are also used in wireless sensor networks and IoT applications due to their ultra-low power consumption.

Table 1. Motion Sensing Requirements for VSM Wearable Applications
  • Pedometer: 2 g range, 100 Hz ODR, 1.8 µA, FIFO of 150 sample sets, no ADC, noise <1 mg/√Hz, 24/7 data collection; required feature: RSS, 8-bit.
  • Fall: 8 g range, 400 Hz ODR, 3 µA, deeper FIFO is better, no ADC, noise <1 mg/√Hz, 24/7 data collection; required feature: trigger-mode FIFO.
  • Optical heart rate: 4 g to 8 g range, ODR <50 Hz, 1 sec of FIFO, ADC needed, noise <1 mg/√Hz, sporadic data collection.
  • Tap (SW): 8 g range, 400 Hz ODR, 3 µA, deeper FIFO is better, no ADC, noise <1 mg/√Hz, 24/7 data collection; required feature: trigger-mode FIFO.
  • Sleep: 2 g range, 12.5 Hz ODR, 1.5 µA, FIFO of 20 sample sets, no ADC, noise <0.1 mg/√Hz; required feature: low noise.
  • Motion switch: 2 g range, 6 Hz ODR, 0.3 µA, no FIFO, no ADC, noise <1 mg/√Hz, data collection on motion; required feature: MCU off.
  • ECG: 4 g to 8 g range, ODR <100 Hz, 1 sec of FIFO, ADC needed, noise <1 mg/√Hz, continuous data collection during exercise.
  • ADXL362/ADXL363: 2 g to 8 g range, 400 Hz ODR, 10 nA to 3 µA, FIFO of 512 samples (up to 13 sec), ADC no/yes, noise 175 µg/√Hz to 550 µg/√Hz, all data collection modes; provides all of the required features above except RSS.
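As a rough illustration of the "motion switch" use case in Table 1, the sketch below checks whether the acceleration magnitude departs from 1 g by more than a threshold. The real parts implement this in hardware with configurable activity/inactivity thresholds; the 0.1 g threshold and 32-sample window here are arbitrary choices for illustration only:

```python
# Software illustration of a motion-switch decision: wake the system only
# when the acceleration magnitude departs from 1 g by more than a threshold.
import numpy as np

def motion_detected(samples_g, threshold_g=0.1):
    """samples_g: (N, 3) array of x/y/z acceleration in units of g."""
    magnitude = np.linalg.norm(samples_g, axis=1)
    return bool(np.any(np.abs(magnitude - 1.0) > threshold_g))

# A stationary window (pure gravity) vs. a window containing a small jolt.
still = np.tile([0.0, 0.0, 1.0], (32, 1))
jolt = still.copy()
jolt[10] = [0.3, 0.0, 1.1]
print(motion_detected(still), motion_detected(jolt))   # False True
```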

When selecting accelerometers for ultra-low power applications, it is imperative to observe the functionality of the sensor at the power consumption levels stated in the datasheet. A key thing to observe is whether the bandwidth and sample rate reduce to levels where usable acceleration data cannot be measured. Some competitor parts turn themselves off and wake up every second to maintain low power consumption and, in doing so, will miss critical acceleration data due to the reduced effective sampling rate. In order to measure the range of real-time human motion, power consumption has to be increased significantly. The ADXL362 and ADXL363 do not alias input signals by undersampling; they sample the full bandwidth of the sensor at all data rates. Power consumption scales dynamically with sample rate, as shown in Figure 1. Of note is the fact that these parts can sample up to 400 Hz with current consumption at only 3 µA. These higher data rates enable extra functionality in wearable device interfaces such as tap/double tap detection. The sampling rate can be reduced to 6 Hz to allow a device to start when picked up or when motion is detected, giving an average current consumption of 270 nA. This also makes the ADXL362 and ADXL363 attractive for implantable applications where batteries can’t be replaced easily.
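The practical consequence of those current figures is battery life measured in years. The estimate below assumes a hypothetical 220 mAh coin cell devoted entirely to the accelerometer; in the wake-up-mode case, battery self-discharge would dominate long before the accelerometer drained the cell:

```python
# Battery-life estimate at the supply currents quoted above for the
# ADXL362/ADXL363: 3 µA at 400 Hz and 270 nA in wake-up mode.
battery_mah = 220.0     # assumed coin-cell capacity

for label, current_ua in [("400 Hz measurement", 3.0), ("wake-up mode", 0.27)]:
    hours = battery_mah * 1000 / current_ua     # µAh / µA = hours
    print(f"{label}: ~{hours / 8760:.0f} years")  # ~8 years and ~93 years
```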


Figure 1. ADXL362 supply current as a function of the output data rate.

In some applications, it is enough for the accelerometer to only poll acceleration once or a few times per second. For these applications, the ADXL362 and ADXL363 provide a wake-up mode that consumes only 270 nA. The ADXL363 combines a 3-axis MEMS accelerometer, a temperature sensor (typical scale factor of 0.065°C), and an onboard ADC input for synchronous conversion of an external signal in a small and thin 3 mm × 3.25 mm × 1.06 mm package. Acceleration and temperature data can be stored in a 512-sample multimode FIFO buffer, allowing up to 13 sec of data to be stored.
Analog Devices developed a VSM watch, for demonstration purposes only, shown in Figure 2, to showcase the capabilities of ultra-low power parts such as the ADXL362 in battery and space constrained applications.


Figure 2. VSM watch incorporating a range of Analog Devices parts to highlight ultra low power, small, lightweight products.

The ADXL362 is used to track motion and also profile motion to help remove unwanted artifacts from other measurements.

Condition-Based Monitoring (CBM)

Key criteria: Low noise, wide bandwidth, signal processing, g-range, and low power.

CBM involves monitoring parameters such as vibration in machinery with the aim of identifying and indicating a potential occurrence of a fault. CBM is a major component of predictive maintenance and its techniques are typically used on rotating machinery such as turbines, fans, pumps, and motors. The key criteria for CBM accelerometers are low noise and wide bandwidth. At the time of writing this article, very few competitors offer MEMS accelerometers with bandwidths above 3.3 kHz, with some specialist manufacturers offering bandwidths up to 7 kHz.
With the advancement of the Industrial IoT, there is an emphasis on reducing cabling and utilizing wireless, ultra low power technologies. This places MEMS accelerometers ahead of piezoelectric accelerometers in terms of size, weight, power consumption, and potential for integrated intelligent features. The most commonly used sensors for CBM are piezoelectric accelerometers, due to their good linearity, SNR, high temperature operation, and wide bandwidths, with 3 Hz to 30 kHz typical and up to several hundred kHz in some cases. However, piezoelectric accelerometers have poor performance around dc and, as Figure 3 shows, quite a lot of faults occur at lower frequencies down toward dc, especially in wind turbines and similar low RPM applications. Piezoelectric sensors do not scale up to large volume manufacturing as well as MEMS due to their mechanical nature, and they are also more expensive and less versatile in terms of interface and power supply.
MEMS capacitive accelerometers offer higher levels of integration and functionality, with features such as self-test, peak acceleration detection, spectral alarms, FFTs, and data storage; they are shock tolerant up to 10,000 g, have a dc response, and are smaller and lighter. The ADXL354/ADXL355 and ADXL356/ADXL357 are well suited to condition monitoring applications based on their ultralow noise and stability over temperature, but ultimately their bandwidth precludes them from performing more in-depth diagnostic analysis. However, even with the limited bandwidth range, these accelerometers can provide important measurements; for example, in wind turbine condition monitoring, where equipment rotates at very low speeds. In this case, a response down to dc is required.


Figure 3. Rotation equipment fault vibration artifacts.

The new ADXL100x family of single-axis accelerometers is optimized for industrial condition monitoring and offers wide measurement bandwidths up to 50 kHz, g-ranges up to ±100 g, and ultralow noise performance—putting these parts on par with piezoelectric accelerometers in terms of performance. A more detailed discussion on Analog Devices MEMS capacitive accelerometers vs. piezoelectric accelerometers can be found in this article: MEMS Accelerometer Performance Comes of Age.
The ADXL1001/ADXL1002 frequency response is shown in Figure 4. The majority of faults occurring in rotating machinery such as damaged sleeve bearings, misalignment, unbalance, rubbing, looseness, gearing faults, bearing wear, and cavitation all occur in the measurement range of the ADXL100x family of condition monitoring accelerometers.


Figure 4. Frequency response of the ADXL1001/ADXL1002, showing the high frequency (>5 kHz) vibration response; a laser vibrometer is used as the accuracy reference for the ADXL1002 package.

Piezoelectric accelerometers typically do not integrate intelligent features whereas MEMS capacitive accelerometers like the ADXL100x family offer built-in overrange detection circuitry, which provides an alert to indicate a significant overrange event occurred that is greater than approximately 2× the specified g-range. This is a critical function in developing an intelligent measuring and monitoring system. The ADXL100x applies some intelligent disabling of the internal clock to protect the sensor element during continuous overrange events, such as those that would occur if a motor had a fault. This relieves the burden on the host processor and can add intelligence to a sensor node—both key criteria for condition monitoring and Industrial IoT solutions.
MEMS capacitive accelerometers have taken a massive leap forward in performance, so much so that the new ADXL100x family is competing for, and winning, sockets previously dominated by piezoelectric sensors. The ADXL35x family offers industry-best, ultralow noise performance, and it is also displacing sensors in CBM applications. New solutions and approaches to CBM are converging with IoT architectures into better sensing, connectivity, and storage and analysis systems. Analog Devices' latest accelerometers are enabling more intelligent monitoring at the edge node, helping factory managers to achieve fully integrated vibration monitoring and analysis systems.
Further complementing this range of MEMS accelerometers is the first generation of subsystems for CBM: the ADIS16227 and ADIS16228 semiautonomous, fully integrated, wide bandwidth vibration analysis systems shown in Figure 5, with features such as programmable alarms over six spectral bands, two-level settings for warning and fault definition, adjustable response delay to reduce false alarms, and internal self-test with status flags. Frequency domain processing includes a 512-point, real-valued FFT for each axis, along with FFT averaging, which reduces the noise floor variation for finer resolution. These fully integrated vibration analysis systems can reduce design time, cost, processor requirements, and space constraints, making them ideal candidates for CBM applications.


Figure 5. Digital triaxial vibration sensor with FFT analysis and storage.
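The FFT-averaging idea is straightforward to illustrate in software: split a long vibration record into 512-point segments, compute the magnitude spectrum of each, and average. The numpy sketch below is a generic illustration of the technique, not the ADIS16227/ADIS16228 firmware; the sample rate, window, and synthetic signal are arbitrary choices:

```python
# Generic FFT-averaging sketch: averaging 512-point magnitude spectra reduces
# the variance of the noise floor, making a small tone easier to resolve.
import numpy as np

fs = 20480.0                       # assumed sample rate, Hz
n = 512                            # FFT length per segment
t = np.arange(40 * n) / fs         # 40 segments of synthetic data

# Synthetic vibration: a 1 kHz tone buried in broadband noise.
x = 0.2 * np.sin(2 * np.pi * 1000 * t) + 0.5 * np.random.randn(t.size)

segments = x.reshape(-1, n) * np.hanning(n)
spectra = np.abs(np.fft.rfft(segments, axis=1))
avg_spectrum = spectra.mean(axis=0)            # averaged magnitude spectrum

freqs = np.fft.rfftfreq(n, d=1 / fs)
peak_bin = 1 + avg_spectrum[1:].argmax()       # skip the dc bin
print(f"Peak near {freqs[peak_bin]:.0f} Hz")   # ~1000 Hz
```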

Internet of Things/Wireless Sensor Networks

Key criteria: Power consumption, integrated features to allow for intelligent power saving and measurements, small size, deep FIFO, and suitable bandwidth.

The promise of the Internet of Things is well understood throughout industry. In order to deliver on this promise, millions of sensors will have to be deployed over the coming years. The vast majority of these sensors will be placed in difficult to access or space constrained locations such as rooftops, on top of street lights, tower masts, bridges, inside heavy machinery, and so on, enabling the concept of smart cities, smart agriculture, smart buildings, etc. Due to these constraints, it is likely that a high proportion of these sensors will require wireless communications, as well as battery power and perhaps some form of energy harvesting.
The trend in IoT applications is to minimize the data transmitted wirelessly to the cloud or a local server for storage and analytics, as existing methods use excess bandwidth and are expensive. Intelligent processing at the sensor node can distinguish between nonuseful and useful data, minimizing the requirement to transmit large amounts of data, thus reducing bandwidth and costs. This places a requirement on the sensors to contain intelligent features while maintaining ultra low power consumption. A standard IoT signal chain is shown in Figure 6. Analog Devices provides solutions for all blocks besides the gateway. Note that not all solutions require wireless connectivity; for a vast number of applications, wired solutions such as RS-485, 4 mA to 20 mA, or Industrial Ethernet are still necessary.
By having some intelligence at the node, it is possible to transmit only useful data along the signal chain, saving power and bandwidth. In CBM, the amount of processing done locally at the sensor node will depend on several factors, such as the cost and complexity of the machine vs. the cost of the condition monitoring system. The data transmitted can range from a simple out-of-range alarm to streams of data. Standards such as ISO 10816 specify warning conditions for a machine of a given size running at a particular RPM, outputting an alarm signal when the vibration velocity exceeds preset thresholds. ISO 10816 is intended to optimize the useful life of the system being measured and its rolling element bearings, and it therefore minimizes the amount of data for transmission, thereby better supporting deployment in WSN architectures.
The requirements for an accelerometer used in an ISO 10816 application are a g-range of 50 g or less and low noise at low frequencies, as acceleration data is periodically integrated to get a single velocity point in mm/sec rms. When accelerometer data containing low frequency noise is integrated, the error can increase linearly in the velocity output. The ISO standards specify a 1 Hz to 1 kHz measurement range, but users would like to integrate as low as 0.1 Hz. Traditionally, this has been limited by the high levels of noise at low frequencies in charge-coupled piezoelectric accelerometers, but Analog Devices' next-generation accelerometers maintain the noise floor down to dc, limited only by the 1/f noise corner of the signal conditioning electronics, which can be minimized to 0.01 Hz with careful design. MEMS accelerometers can be used in economical CBM applications for lower cost equipment or can be integrated into embedded solutions due to their smaller size and lower cost compared to piezoelectric sensors.
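As a concrete illustration of that acceleration-to-velocity step, the sketch below band-limits an acceleration signal, integrates it to velocity, and reports a single rms value in mm/s. It assumes numpy and scipy are available; the sample rate, filter corners, and synthetic 50 Hz input are illustrative choices, not values taken from ISO 10816:

```python
# Band-limit acceleration, integrate to velocity, and report mm/s rms,
# in the spirit of an ISO 10816-style vibration-velocity metric.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 4096.0                                       # assumed sample rate, Hz
t = np.arange(int(8 * fs)) / fs
accel_ms2 = 2.0 * np.sin(2 * np.pi * 50 * t)      # synthetic 50 Hz vibration, m/s^2

# Restrict roughly to the measurement band discussed above before integrating.
sos = butter(4, [10, 1000], btype="bandpass", fs=fs, output="sos")
accel_f = sosfiltfilt(sos, accel_ms2)

# Trapezoidal integration of acceleration to velocity, then remove the
# integration constant and take the rms.
vel_ms = np.concatenate(([0.0], np.cumsum((accel_f[1:] + accel_f[:-1]) / 2) / fs))
vel_ms -= vel_ms.mean()
vel_rms_mms = 1000 * np.sqrt(np.mean(vel_ms ** 2))
print(f"Vibration velocity: {vel_rms_mms:.1f} mm/s rms")   # ~4.5 mm/s for this input
```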


Figure 6. Edge sensor node solutions provided by Analog Devices.

Analog Devices has a wide range of accelerometers that are ideal for use in intelligent sensor nodes that require ultra low power, including as many features as possible to prolong battery life and help reduce bandwidth usage and thus costs. Some of the key criteria for IoT sensor nodes are low power consumption (ADXL362, ADXL363), and having a rich feature set to allow energy management and detection of specific data such as over threshold activity, spectral profile alarms, peak acceleration values, and prolonged activity or inactivity (ADXL372, ADXL375).
All of these accelerometers can keep the entire system powered off while storing acceleration data in the FIFO and looking for an activity event. When the impact event occurs, data that was collected prior to the event is frozen in the FIFO. Without a FIFO, capturing samples prior to an event would require continuous sampling and processing of acceleration signals by the processor, which significantly decreases battery life. The ADXL362 and ADXL363 FIFO can store over 13 sec of data, providing a clear picture of events prior to an activity trigger. Ultra low power consumption is maintained by not utilizing power duty cycling, but rather employing a full bandwidth architecture at all data rates, which prevents aliasing of input signals.

Asset Health Monitoring

Key criteria: Power consumption, integrated features to allow for intelligent power saving and measurements, small size, deep FIFO, and suitable bandwidth.

Asset health monitoring (AHM) typically involves monitoring a high-value asset over a period of time, whether it is static or in transit. These assets could be goods inside shipping containers, remote pipelines, civilians, soldiers, high-density batteries, etc., that are susceptible to impact or shock events. The Internet of Things provides an ideal infrastructure for reporting such events that could affect an asset's function or safety. The key criterion for a sensor used for AHM is the ability to measure high g shock and impact events relevant to the asset while consuming very low power. When embedding such sensors in battery operated or portable applications, other key sensor specifications to consider include size, oversampling, and antialiasing features to accurately process high-frequency content, as well as intelligent features that extend battery life by maximizing host processor sleep time and allowing interrupt-driven algorithms for detecting and capturing shock profiles.
The ADXL372 micropower, ±200 g MEMS accelerometer targets the emerging asset health market space for intelligent IoT edge nodes. This part contains several unique features developed specifically for the AHM market to simplify the system design and provide system-level power savings. High g events such as shock or impact are often closely associated with acceleration content over a wide range of frequencies. Wide bandwidth is required to accurately capture these events, as measuring with insufficient bandwidth will effectively reduce the magnitude of the recorded event, leading to inaccuracies. This is a key parameter to observe in data sheets. Some parts don’t satisfy the Nyquist criteria for sampling rates. The ADXL375 and ADXL372 provide the option of capturing the entire shock profile for further analysis with no intervention from a host processor. This is achieved using the shock interrupt registers in combination with the accelerometer’s internal FIFO. Figure 7 shows the importance of having a sufficient FIFO in order to determine the entire shock profile prior to the trigger event. With an insufficient FIFO, it would not be possible to record and maintain the shock event for further analysis.


Figure 7. Accurately capturing shock profile.

The ADXL372 can operate with bandwidths of up to 3200 Hz at extremely low power levels. A steep filter roll-off is also useful for effective suppression of out-of-band content, and the ADXL372 incorporates a four-pole, low-pass antialiasing filter for this purpose. Without antialias filtering, any input signals whose frequencies exceed the output data rate/2 could fold into the measurement bandwidth of interest, leading to inaccurate measurements. This four-pole, low-pass filter has a user-selectable bandwidth to allow maximum flexibility in a user's application.
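To see why a four-pole filter is a good fit here, consider its roll-off: roughly 24 dB per octave, or 80 dB per decade, above the corner. The scipy sketch below evaluates an analog fourth-order Butterworth low-pass at a few frequencies; the 1600 Hz corner is just an example of a selectable bandwidth and is not taken from the ADXL372 datasheet:

```python
# Attenuation of a 4th-order (four-pole) Butterworth low-pass at multiples of
# an example 1600 Hz corner frequency, illustrating antialias suppression.
import numpy as np
from scipy.signal import butter, freqs

cutoff_hz = 1600.0                               # example selectable bandwidth
b, a = butter(4, 2 * np.pi * cutoff_hz, btype="low", analog=True)

test_hz = np.array([1600.0, 3200.0, 6400.0, 12800.0])
_, h = freqs(b, a, worN=2 * np.pi * test_hz)     # evaluate at these rad/s points
for f, mag_db in zip(test_hz, 20 * np.log10(np.abs(h))):
    print(f"{f:7.0f} Hz: {mag_db:6.1f} dB")      # ~-3, -24, -48, -72 dB
```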
Instant-on impact detection is a feature that allows the user to configure the ADXL372 to capture impact events above a certain threshold while being in an ultra low power mode. As shown in Figure 8, after an impact event occurs, the accelerometer goes into full measurement mode in order to accurately capture the impact profile.


Figure 8. Instant-on mode using default threshold.

Some applications require that only the peak acceleration sample from an impact event be recorded, as this alone can provide sufficient information. The ADXL372 FIFO has the capability to store peak acceleration samples for each axis. The longest time duration that can be stored in the FIFO is 1.28 sec (512 single-axis samples at 400 Hz ODR). 170 three-axis sample sets at 3200 Hz ODR correspond to a 50 ms time window and are sufficient to capture a typical impact waveform. Applications that do not require the full event profile can greatly increase the time between FIFO reads by storing only peak acceleration information, providing further power savings. The 512 FIFO samples can be allotted in several ways, including the following:
  • 170 sample sets of concurrent 3-axis data
  • 256 sample sets of concurrent 2-axis data (user selectable)
  • 512 sample sets of single-axis data
  • 170 sets of impact event peaks (x, y, z)
Appropriate use of the FIFO enables system-level power savings by enabling the host processor to sleep for extended periods while the accelerometer autonomously collects data. Alternatively, using the FIFO to collect data can unburden the host processor while it tends to other tasks.
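The FIFO-window figures quoted above follow directly from dividing 512 samples by the number of axes stored and the output data rate, as the short check below shows:

```python
# Where the FIFO window figures come from: 512 samples shared across the
# stored axes at a given output data rate.
fifo_samples = 512

for axes, odr_hz in [(1, 400), (3, 400), (3, 3200)]:
    window_s = fifo_samples / (axes * odr_hz)
    print(f"{axes}-axis @ {odr_hz:>4} Hz: {window_s * 1000:6.1f} ms")
# 1-axis @ 400 Hz  -> 1280 ms (the 1.28 sec figure)
# 3-axis @ 3200 Hz ->  ~53 ms (the ~50 ms impact window)
```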
There are several other accelerometers on the market with similar high g performance, but they are not suitable for AHM/SHM IoT edge node applications due to their narrow bandwidth and higher power consumption. In cases where a low power mode is offered, it typically is at lower bandwidths where accurate measurements can’t be made. The ADXL372 truly creates a stick and forget approach to AHM/SHM, making end customers reconsider the potential asset classes where this would be viable.

Conclusion

Analog Devices provides an extremely broad range of accelerometers to suit a multitude of applications, some of which were not focused on in this article, like dead reckoning, AHRS, inertial measurements, automotive stabilization and safety, and medical alignment. Our next-generation MEMS capacitive accelerometers are ideally suited to applications demanding low noise, low power, high stability, and performance over temperature; minimal compensation; and integrated intelligent features to improve overall system performance and ease design complexity. Analog Devices provides all the relevant data sheet information to help you choose the most suitable part for your application. Visit analog.com/MEMS for more details on Analog Devices' line of MEMS accelerometers.

References

Broeders, Jan-Hein. “Transition from Wearable to Medical Devices.” Analog Devices, Inc., 2017.
Scannell, Bob. “Embedded Intelligence and Communication Enabling Reliable and Continuous Vibration Monitoring.” (PDF) Analog Devices, Inc., 2015.

Spence, Ed. “What You Need to Know About MEMS Accelerometers for Condition Monitoring.” Analog Devices, Inc., 2016.

High-Accuracy Temperature Sensing: New Digital Temperature Sensors from Sensirion

Sensirion asserts that their new series of digital temperature sensors are more accurate, have increased intelligence, and are more reliable than their predecessors.
Sensirion recently introduced their new series of highly accurate digital temperature sensors. This new series, the STS3x, uses Sensirion's "industry-proven" CMOSens® technology to achieve increased intelligence, better reliability, and improved accuracy. There are three flavors in this new series, all of which use the same datasheet:
  • STS30: ±0.2°C accuracy at temperature range of 0°C to 65°C
  • STS31: ±0.2°C accuracy at temperature range of 0°C to 90°C
  • STS35: ±0.1°C accuracy at temperature range of 20°C to 60°C


Figure 1. STS3x series of high-accuracy temperature sensors. Image taken from the datasheet (PDF).

A Wide Operating Temperature Range, but Not Always with High Accuracy

Although each of the three ICs in this series can function over the wide operating temperature range of -40°C to 125°C (which, by the way, is quite impressive, as this range is considered automotive grade), the touted high-accuracy measurements do not extend across this entire range. Rather, the accuracies (as can be seen in the images below) begin to change noticeably outside a narrower temperature range.


Figure 2. Accuracy vs. temperature of the STS30 and STS31. Image courtesy of Sensirion.
 
Figure 3. Accuracy vs. temperature of the STS35. Image courtesy of Sensirion

Designed for Mass Production

The datasheet states (see image below) that this IC—specifically the CMOSens® technology that it uses—is "designed for mass production." Umm, shouldn't this go without saying? I have seen datasheets state "not recommended for new designs," but I don't ever recall seeing one that specifies that the IC, or its underlying technology, is designed for mass production. This benefit makes me question if Sensirion has other ICs that are in fact not designed for mass production. It's all a bit puzzling. Have you seen other IC datasheets call this out? If so, please let us know.
 
Figure 4. Somewhat puzzling that the IC's core technology is called out as being "designed for mass production" (from the datasheet).

Comes in a Small Package and Has Alert and Reset Pins

This IC is available only in an 8-pin dual-flat no-leads (DFN) package, measuring a scant 2.5mm × 2.5mm with a tiny thickness of only 0.9mm. There is also, in addition to the eight pins, a thermal pad that is connected to ground (see image below).
Although this device uses an I2C interface with communication speeds up to 1 MHz, two of the eight pins are dedicated to Alert (pin 3) and nReset (pin 6). The Alert pin is intended to be connected, if desired, to an interrupt pin on a microcontroller. According to section 3.5 (ALERT Pin) of the datasheet, "The output of the pin depends on the value of the temperature reading relative to programmable limits," and its function is "explained in a separate application note." However, and this is very perplexing, there is no additional information related to the "separate application note." My suspicion is that Sensirion intended to include a link to this app note but then simply forgot to include it; perhaps additional information will be provided in the datasheet's next revision.
The nReset pin may be used to generate a system reset of the IC. And while a reset may also be generated externally by issuing a command (referred to as a soft reset), to achieve a full reset it is recommended to use the nReset pin (or, of course, you can also cycle power). On the other hand, if this pin is not going to be used, then it is recommended that it be either left floating or tied to VDD via a series resistor with a value of ≥2 kΩ. The datasheet goes on to say that "the nRESET pin is internally connected to VDD with a pull up resistor of 50 kΩ." So... why are there two options for how to configure the nReset pin when it's not being used? Why not just recommend that it should be left floating (since it's already pulled high internally)? And, if there are indeed technical reasons for when the pin should be externally pulled up to VDD (for better noise immunity, as an example), then let us know what those technical reasons are.

Figure 5. Pinout and pin descriptions of the STS3x series. Image courtesy of datasheet (PDF).
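For context on how the I2C interface is typically used from a host, here is a hedged sketch of a single-shot temperature read using the Python smbus2 package. The 0x4A address, the 0x24/0x00 "single shot, high repeatability" command, and the -45 + 175 * raw / 65535 conversion are recalled from Sensirion's SHT3x/STS3x family documentation and should be verified against the STS3x datasheet, as should the CRC handling, which is skipped here:

```python
# Hedged single-shot temperature read from an STS3x-class sensor over I2C.
# Address, command word, and conversion formula are assumptions to verify
# against the datasheet; the CRC byte is read but not checked.
import time
from smbus2 import SMBus, i2c_msg

ADDR = 0x4A   # assumed default 7-bit address with the ADDR pin tied low

with SMBus(1) as bus:                                  # I2C bus 1 on a Raspberry Pi
    bus.i2c_rdwr(i2c_msg.write(ADDR, [0x24, 0x00]))    # start one measurement
    time.sleep(0.02)                                   # wait out the conversion
    read = i2c_msg.read(ADDR, 3)                       # temperature MSB, LSB, CRC
    bus.i2c_rdwr(read)
    msb, lsb, _crc = list(read)
    raw = (msb << 8) | lsb
    print(f"Temperature: {-45 + 175 * raw / 65535:.2f} °C")
```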

Packaging and Land Pattern Details

Sensirion provides detailed information on both the package outline and the land pattern. In fact, because the IC's pad pitch is just 0.5 mm, it is recommended that only one solder mask opening be used for all four pads on one side. It is also suggested, when using solder paste printing, that a laser-cut stainless steel stencil with trapezoidal walls and a stencil thickness of 0.1 or 0.125 mm be used. See the images below. This information should be quite helpful for a PCB layout design team.


Figure 6. Package information, from the datasheet (PDF).
 


Figure 7. Land pattern information, from the datasheet (PDF).


Have you had a chance to use any of the temperature sensors in this new series? If so, leave a comment and tell us about your experiences.

Engineer Spotlight: Shafy Eltoukhy, Head of SiFive’s DesignShare Program

AAC's Chantelle Dubois had a chance to speak one-on-one with the head of SiFive's DesignShare program, Shafy Eltoukhy, to learn more about the DesignShare program, the benefits to IP sharing, and what he believes will be important for the future of SoC design.
Shafy Eltoukhy is a seasoned engineer with over 35 years of experience in the semiconductor world. He began his career at Intel Corp. as a senior device engineer shortly after graduating with a PhD in device physics from the University of Waterloo in 1982. Shortly after, he left Intel to co-found Actel, an FPGA company now known as Microsemi.
Since then, Eltoukhy has been involved with establishing other silicon and semiconductor companies, including Lightspeed Logic and Open-Silicon Corp, both of which specialize in ASIC design and production. In 2014, he returned to Microsemi as Vice President of the Analog Mixed Signal unit before eventually joining the SiFive team, where he now leads the DesignShare program—an initiative that intends to bring down the barriers to innovation in System-on-a-Chip (SoC) design.

Dr. Shafy Eltoukhy, Head of SiFive's DesignShare Program. Image courtesy of Dr. Shafy Eltoukhy.

“By training, I’m a start-up guy—this is my fourth startup company,” Eltoukhy told AAC during the interview.
DesignShare has so far announced partnerships with Think Silicon, Rambus, UltraSoC, Flex Logix, eMemory, and Analog Bits. More partnerships are expected to be announced in the near future.
AAC writer, Chantelle Dubois, had a chance to speak one-on-one with Eltoukhy to learn more about the DesignShare program, the benefits to IP sharing, and what he believes will be important for the future of SoC design.


Chantelle Dubois (AAC): How did you get involved with SiFive?
Shafy Eltoukhy (SE): The CEO of SiFive called me and told me that he had a great opportunity. I was very excited to get involved with SiFive because I had been in the ASIC business for a while and I know about all the pains that ASIC customers and designers go through, the time it takes to get products to markets, and so on. I saw SiFive as an opportunity to fill in the gaps and resolve the issues that designers and customers have been facing over the last 25 years.

AAC: How did the concept of the DesignShare Program come about and what problem is it addressing when it comes to SoC design?
SE: If you look at any custom design, especially if you are building an SoC, there are a few components that you need to have before you start actually building the chip. Of course, you may have your own secret sauce, or your own recipe, or your own IP that you are trying to build, but you cannot build a chip based on your idea alone. You really need to be able to use third-party IPs with your own IP so that you can differentiate yourself—a large portion of the cost of building an SoC is the third-party IPs.
For example, you will need a processor, you will need I/Os, high-speed interfaces for DRAM, and so on. By the time you add up the costs of all the IPs, you may end up with a few million dollars just to license IPs from third parties. That becomes a barrier for a lot of good ideas from startup companies or from two guys in a garage who want to build their own idea but cannot really afford the IP payments up front. RISC-V is a solution for one of the big components involved, which is the processor, and it brings down the costs by a third. So, the DesignShare program addresses the problem involved with accessing third-party IPs.

SiFive uses RISC-V in its SoC cores. Image courtesy of SiFive.

AAC: How does DesignShare actually work?
SE: The idea is that we are going to be building the equivalent of an app store, but for IPs. IP vendors can have their IP added to the program, and it will be protected by SiFive since we will be the ones building the chip for the customer. We will provide the customer the IP models and so on, but the actual physical IP never leaves SiFive.
When we have our system up and running, the customer will be able to browse the IP library with an explanation of each IP with their specs, choose their processor, then the IPs needed for the application, and then can add their own IP. The customer then lets us know how many prototypes they need, and will pay SiFive some money for making the prototypes, but the money is not for the IPs; the money is for sharing the costs of mask and fabrication, so it’s very small compared to how much the customer would pay for doing it on their own.
After we deliver the prototype, the customer can then go to production, and before they sign the deal with us they will know exactly how much it will cost, including the IPs. They do not pay for the IPs until production.

AAC: What makes the DesignShare Program attractive to IP vendors?
SE: With the DesignShare program, the IP vendor does not have to deal with multiple customers at the same time: no sales support, no legal documents that you have to write with every customer, and the IP is secure with SiFive. That is why it’s attractive to IP vendors—because this opens up a new channel for them that they did not get exposed to before. They only deal with us as an aggregator, and there’s no overhead involved with sales, legal documentation, finding customers, and so on.
At the same time, we also lower the barrier for our customers to create prototypes. It isn’t until the customer’s idea takes off that they will pay for the IPs, and then we pay the IP vendors. It’s a win-win for SiFive, the IP vendors, and the customer.

AAC: How do you determine which IP vendors you partner with? 
SE: If you look at any SoC in general, there is a foundation that everybody needs. For example, I/Os and memory—there is a basic foundation, what we call Foundation IP, that you have to have.
So, first we try to find a variety of IP vendors that have these basic IPs because this will be the foundation of any SoC. After that, we as a company will be defining vertical markets that we believe will be in high demand for a lot of new startups and small companies. These will be associated with the IoT, edge computing, and machine learning. Then we will be creating a library of IPs that will serve this market.
Currently, we are exploring how many IP vendors are interested and we are receiving great responses. We launched DesignShare two months ago and we have already announced eight vendors, which is almost one per week. Next year, we will probably continue at the same rate. So, we are going to be driving the IP library based on the vertical markets that we will be defining.

Flex Logix FPGA IP is part of the DesignShare program. Image courtesy of Flex Logix.

AAC: What's the demand like for support in creating custom SoC designs?
SE: We already have customers calling us. For example, we made an announcement with an embedded FPGA IP from Flex Logix, and a customer contacted us who wanted to combine an FPGA with a RISC-V processor—so we are exploring with them how many gates they need for the FPGA, the other IPs they’ll need around it, and so on.
The new concept of DesignShare plus open source for the core will lower the barrier for a lot of people who had no real dream of building such a system at a low cost compared to what they have done in the past. As you know, custom design has been going down over the years and the main reason is that nobody can afford it.

AAC: Having been involved in the industry for a while, are there any trends in the silicon world that you think will be important to watch out for in the near future?
SE: I think in the silicon world—I discussed this as a part of why we are going to be choosing certain vertical markets—there is a push in IoT for computing to be near the sensing part of the application. Right now, in order to do any computing, we still have to go to the cloud and datacenters. But there are a lot of applications that cannot afford to send data to the cloud because maybe the Wi-Fi is down or the cellular service is down.
By moving the computing closer to the sensors, you can avoid overwhelming the cloud or datacenters, which may not make sense to use in some cases. For example, Tesla cars have over 250 computing elements—imagine if you did not have edge computing or computing at every node. All of these sensors would be sending data to the main processors in the car. In normal operation, nothing happens—but if all of the sensors keep sending data to the processor, it could overwhelm the processor.
If you start moving computing near the sensor, that means you only send the critical data to the processor. So you make intelligent decisions at the edge of the device, or near the sensor part of it, and that’s why you need some kind of computing element near the sensor. RISC-V could be one of the controllers with the IPs from DesignShare. So, in the future, computing is going to move closer to the edge.

Edge computing is becoming more important in applications such as autonomous driving. Image courtesy of Shutterstock.

AAC: Is there anything else you’d like our readers to know that maybe we haven’t discussed yet?
SE: At SiFive, we are trying to lower the barriers for innovation so that more people can innovate. We want to open innovation to the world. If you look at how many people can design, it’s only a handful of places on Earth that really have the capability or talent, so by making a lot of these IPs available for low cost or free, it will open up innovation to more people. SiFive will enable this.


AAC: Thank you for your time, Shafy. It was a pleasure to speak with you.

3D Metal Printing: The Next Phase of Aircraft Manufacturing

3D printing technology has come a long way from being an experimental tool used to create roughly textured objects from plastic resins. Here's a look at how 3D printing has made it into industrial contexts, specifically aerospace.
3D printing has been embraced by many, including hobbyists and those fabricating their own products. Until recently, however, it's been unattractive to industry professionals. Improvements in 3D printing technology, access to more diverse materials, and precision manufacturing have made it an ideal tool for the aerospace industry. In particular, several companies are now actively using 3D printing to create engines, interiors, and other parts of aircraft.
The Federal Aviation Administration has also recognized the emergence of 3D printing in the aerospace industry, preparing for additive manufacturing by drafting the “Additive Manufacturing Strategic Roadmap”. The group working on the roadmap includes the US Air Force, the US Army, and NASA.
One of the major challenges in trying to regulate 3D printing in the aerospace industry comes from the wide variety of processes, materials, and methods in use, and the need to ensure they all meet safety standards.
3D printing and additive manufacturing can save companies money, streamline the manufacturing process, reduce waste, and open possibilities for more innovative designs. Here are a few examples of how 3D printing is being used in the aerospace industry right now.

GE Additive New Printer and ATP Engine

GE Additive, a branch of GE Technology, has recently taken the record for the largest industrial 3D printer built. The unnamed printer is capable of printing objects 1 m in diameter using a 1 kW laser and thin layers of metal powder. The printer is also scalable so that even larger objects can be printed. The company intends for the printer to be used in industrial manufacturing for aircraft, automobiles, and spacecraft.
GE has already been using 3D printing for aircraft manufacturing with the Advanced Turboprop (ATP) engine.

The ATP which includes 3D printed parts. Image courtesy of General Electric.

By 3D printing parts of the ATP, GE reduced the engine's part count from 855 to only 12. The engine will make its debut in the Cessna Denali in 2019.

Using 3D Printing to Bring Down Costs of the 787 Dreamliner

Boeing has been losing money on each 787 Dreamliner it has produced for years—nearly $30 million for each $265 million plane. This is largely due to the high cost of R&D and manufacturing. The design relies on the use of titanium, as opposed to aluminum, to keep the large jet airliner light and fuel efficient.
However, in early 2017, Boeing partnered with Norsk Titanium to begin using 3D printed parts in the manufacturing process to bring costs down, saving Boeing $3 million for each 787 produced.
One of the challenges with using 3D-printed parts for aviation is that each part needs to be approved by the FAA. So far, Norsk Titanium has received FAA approval for load bearing components and hopes to receive further approval for the rest of its manufacturing process to continue to bring down the cost of each 787 produced.

An FAA approved 3D manufactured component for the 787 Dreamliner. Image courtesy of Norsk Titanium.

The cost savings from 3D printing parts for the 787 comes from the reduced cost in raw materials used, as well as a reduction in the energy requirements for manufacturing.
It's important to note that Norsk Titanium uses a proprietary printing method known as Rapid Plasma Deposition. In this process, titanium is melted in an argon gas environment to print parts using a MERKE IV RPD machine. Given the expensive and custom nature of this form of 3D printing technology, it's unlikely that most industries will get their hands on it anytime soon without contracting Norsk Titanium themselves.

Archinaut: 3D Printing in Space

The advantages of 3D printing even extend beyond Earthly airspace. A company named Made in Space has been making gains in space-based 3D printing with its Archinaut project. Archinaut addresses some of the most limiting factors of putting large structures in space: size, room on launch vehicles, and the cost of launching.
By using a combination of 3D printing and automated, robotic devices, large structures can be printed on demand in space using polymer-alloys. This opens up a range of possibilities for manufacturing space objects, like large telescopes.
Made in Space currently has two 3D zero-G printers on the International Space Station and plans to have their Archinaut project operational sometime in the next decade.




3D printing has been a tool of choice for hobbyists and startups to build enclosures, but it's been generally slow to appear in professional settings. This large-scale use of 3D printing in aeronautics represents a major step for this emerging technology.
Have you worked with 3D printing in a professional setting? Share your experiences in the comments below.


Feature image courtesy of General Electric.

Towards Low-Power Wearables: A New Biopotential and Bioimpedance Sensor from Maxim Integrated

Maxim Integrated offers their new low-power and high-performance IC for bio measurements.
The MAX30001 IC, from Maxim Integrated, serves as a complete analog front-end (AFE) solution for performing biopotential and bioimpedance measurements for clinical and fitness applications.
What, you might be asking, are biopotential and bioimpedance measurements? Well, according to the datasheet, the MAX30001's biopotential channel provides electrocardiogram (ECG) waveforms, senses heart rate, and can detect pacemaker edges, while the bioimpedance channel measures respiration. It all sounds like magic to me! Equally impressive is the IC's very small package—actually, there are two package options.

Two Very Small Package Options

Depending on your design's requirements, when using the MAX30001 biosensor, you have two package types to choose from. One is a 28-pin 5mm × 5mm TQFN (thin quad flat no-leads) package; this package is better suited for those who might need to do some hand placement or manual soldering (such as with a hot-air gun), or those who simply don’t want to deal with a wafer-level package (WLP)... which is the other available packaging option. The 30-pin wafer-level package (2.7mm × 2.7mm) is meant for space-constrained applications. The image below shows the pinouts and dimensions of these two package types.


Figure 1. Pinout and package dimensions of the two available package types (TQFN and WLP). Diagrams taken from the datasheet (PDF).

The datasheet also provides additional package and footprint (land pattern) information, and Maxim has kindly provided hyperlinks to their latest package and layout drawings (see image below).


Figure 2. Additional package and footprint information, from the datasheet (PDF).

For a tip on how to protect a WLP device from “accidental removal,” read the section entitled Wafer-Level Package Information in our article discussing the MAX16140.

Recommended Filtering Options

When using the MAX30001 biosensor, you don't need to worry about connecting an external signal amplifier or implementing filters, because (as they should be) these critical components are included in this AFE solution. As explained in the datasheet, this IC's ECG channel includes a low-noise fixed-gain instrumentation amplifier that rejects common-mode AC interference as well as differential DC voltage caused by electrode polarization.
The user can set, by means of a single external capacitor, the corner frequency for the DC rejection filter. Maxim recommends the following three corner frequency options:
  • 5Hz is recommended for heart rate monitoring applications. This setting provides superior rejection of motion artifacts at the expense of ECG waveform quality.
  • 0.5Hz is chosen for applications where robust ECG waveforms are required and moderate motion-artifact rejection is acceptable.
  • 0.05Hz should be used when optimal ECG waveform quality is desired.
Table 2 of the datasheet (see image below) shows the recommended frequency-setting capacitor (CHPF) values. Furthermore, the Pin Description table suggests using X7R capacitors with high voltage ratings (25V) for improving the linearity of the ECG signal path.


Figure 3. Recommended capacitor values for setting the high-pass filter corner frequency. Table taken from the datasheet (PDF).

Low Power

The MAX30001 is advertised as being an ultra-low-power device, and this indeed appears to be the case according to the Electrical Characteristics table given in the datasheet (see image below). The lowest operational current/power consumption occurs when voltages VAVDD and VDVDD are both connected to 1.1V and when only the ECG functionality is utilized. Under these conditions, a mere 76µA (or 83.6µW) is used. During non-operational conditions (i.e., shutdown mode), only 0.58µA (typical) is used when VAVDD and VDVDD are both connected to 2.0V.
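As a quick sanity check on those numbers, the quoted power figure is simply the supply voltage multiplied by the current draw (both rails sit at 1.1V in this condition):

P = V × I = 1.1 V × 76 µA ≈ 83.6 µW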
 

Figure 4. Current consumption specs. Table taken from the datasheet (PDF).

An Evaluation Kit Is Available

With the goal of helping you become more familiar with the functionality of the MAX30001—and, of course, with the goal of encouraging you to implement the MAX30001 in your biosensing design—Maxim offers their MAX30001 evaluation system (MAX30001EVSYS). Also, this video is both informative and educational in explaining the functionality and capabilities of the MAX30001.


The MAX30001EVS. Image from Maxim.


Have you had a chance to use the MAX30001? If so, leave a comment and tell us about your experiences.

How MOSFET Arrays Can Prevent Current Leakage in High-Voltage Systems

High-voltage systems now have the choice of using plug-and-play PCBs to automatically control leakage current in backup power circuitry.
There is a growing application of multiple supercapacitor cells in modules that serve the energy-storage needs of higher-voltage systems in datacenters, industrial automation equipment, and public utility infrastructure. But these high-voltage systems often demand safe voltage balancing.
That's because differences in leakage currents between individual supercapacitors cause their voltages to drift apart, and overvoltage is a major cause of supercapacitor failures: exceeding a cell's maximum rated voltage shortens its lifespan and eventually destroys the cell.
Therefore, designers need to balance supercapacitor cells in a stack of two or more to overcome overvoltage-related issues.
Enter plug-and-play PCBs that provide a platform for balancing high-voltage supercapacitors through a low-voltage, low-leakage, and low-current controlling method in a small form factor.
When leakage current from another supercapacitor raises a cell's voltage, the RDS(ON) of the MOSFET connected across that cell decreases. The resulting increase in drain current, IDS(ON), bleeds off the excess charge and brings the cell voltage back down.
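To make that self-correcting behavior concrete, below is a toy numerical sketch of a single cell being pulled back toward its threshold. The simple linear MOSFET model and every value in it are illustrative assumptions, not ALD device parameters.

#include <cstdio>

// Toy model: one supercapacitor cell whose voltage has drifted above its target,
// bled back down by the balancing MOSFET connected across it.
int main() {
    double vCell = 3.05;               // cell voltage that has drifted high (V)
    const double vThreshold = 3.0;     // balancing threshold voltage (V)
    const double capacitance = 3000.0; // supercapacitor size (F)
    const double gainAperV = 0.002;    // assumed rise in MOSFET drain current per volt of overdrive (A/V)
    const double dt = 86400.0;         // one-day time step (s)

    for (int day = 1; day <= 56; ++day) {
        // Above the threshold, RDS(ON) drops and the drain current rises roughly
        // in proportion to the overdrive; below it, the current is negligible.
        double overdrive = vCell - vThreshold;
        double iDrain = (overdrive > 0.0) ? gainAperV * overdrive : 0.0;
        vCell -= iDrain * dt / capacitance; // dV = -I*dt/C: the drain current bleeds charge off the cell
        if (day % 7 == 0)
            printf("day %2d: Vcell = %.4f V, Idrain = %.1f uA\n", day, vCell, iDrain * 1e6);
    }
    return 0;
}

With these made-up numbers, the excess ~50 mV decays over a few weeks; the point is only the direction of the feedback: more overvoltage means more drain current, which removes charge until the cell drifts back toward its threshold.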
The plug-and-play boards use MOSFET arrays to offer a circuit design that is more efficient than other passive or active balancing methods, at lower cost and in less board space. These MOSFET boards automatically control leakage current and enhance the reliability of backup power devices, such as those used in 700V systems.

The schematic of a fully-populated board with three MOSFET arrays that serve four supercapacitors: C1 to C4. Image courtesy of Advanced Linear Devices Inc. (ALD).

Take, for instance, the MOSFET boards from Advanced Linear Devices Inc. (ALD) shown above, which balance for 2.8V, 3.0V and 3.3V supercapacitors arranged in a series stack by equalizing the leakage current of each cell.
The ALD SABMB810028 boards are built with the company's ALD810028SCLI SAB MOSFETs, which balance each cell through low levels of leakage current without exposure to supercapacitor charge/discharge voltage levels for cells of 3,000 farads (F) or more.

A view of the plug-and-play board with MOSFET arrays. Image courtesy of DigiChip.

These boards allow supercapacitor charging and discharging currents to pass directly through the cells themselves, bypassing the SAB MOSFETs mounted on the board and adding near-zero leakage current. This ensures that the average additional power dissipation due to DC leakage of the supercapacitor is effectively zero.
This also makes them a viable alternative to methods in which the power dissipated by the balancing circuitry itself far exceeds the energy lost to supercapacitor leakage currents. As a result, this method of supercapacitor balancing is highly energy-efficient and well suited for low-loss energy harvesting and long-life battery-operated applications.

The boards—rated for -40°C to +85°C temperatures—can balance up to four supercapacitors connected in series when fully populated. Moreover, populated boards are available with different combinations of MOSFETs to reach required voltages.

Using Low-Voltage Drivers to Boost RF Power Amplifier Efficiency

The growing use of wireless data is driving demand for communication systems that can transmit more data with greater energy efficiency, both to cut operating costs and, in mobile devices, to increase battery life.
It’s especially challenging for a transmitter’s power amplifier (PA) to meet both demands at once since it needs to achieve high average efficiency at the same time as coping with the high peak-to-average-power ratios (PAR) of the complex wideband modulation schemes used by the latest cellular standards.
The average efficiency of the PA is mainly determined by the efficiency of the driver and the end stage.
Ampleon built a two-stage GaN RF PA MMIC that uses a GaN transistor operating at low voltage as a driver. This increases the average efficiency of the overall PA by both decreasing the power consumption of the driver and eliminating the need for inter-stage matching between it and the end stage.
The end stage of the MMIC is terminated with a quasi load-insensitive (QLI) class-E load network to achieve high efficiency, despite the large output power variation caused by load modulation. This load network is built in a standard RF package, using bond-wire and package-lead capacitance.
Load-pull measurements show that the overall power efficiency of the PA remains greater than 70% despite a wide range of load modulations, such as an 8dB variation in output power. This sustained efficiency makes the MMIC useful for PA architectures that rely upon load modulation, such as the Doherty and Outphasing approaches.
To demonstrate the design, Ampleon mounted the MMIC on a PCB. The linear gain of this system measured around 27dB, with a maximum efficiency of 76% at an output power of 35.4dBm at 2.14 GHz. The supply voltages of the driver and the end stage are 5.5V and 25V, respectively. A WCDMA signal, linearized with a vector-switched generalized memory polynomial digital pre-distortion (VS-GMP DPD) algorithm, was applied to our demo set-up and achieved a –52.4dBc adjacent channel leakage ratio at 29.4dBm average output power.

Circuit Architecture

The schematics of the conventional and the new low-voltage driver RF PA line-ups are shown at left and right of Figure 1, respectively.


Figure 1: Left: the conventional high-voltage driver RF PA. Right: the low-voltage driver RF PA. 

The conventional approach uses the same supply voltage for both the driver and the end stage, which means the overall PA needs a matching network between its driver and end stage. Using a low supply voltage on the driver reduces its output impedance enough that the overall PA does not need such a matching network, which reduces power losses. The low driver supply voltage also reduces the driver’s power consumption, which improves the overall efficiency. Getting rid of the inter-stage matching network also cuts the size of the MMIC, so reducing costs.
Fig 2 compares the overall efficiency of the conventional high-voltage and the new low-voltage driver RF PA topologies in a simulation. Although the simulated drain efficiencies (DE) are almost the same for high-voltage and low-voltage cases, there is a significant difference in power added efficiency (PAE).


Figure 2. Simulated comparisons of drain efficiency (top) and power added efficiency (below) for the two driver architectures when the PA is operating at 2.14 GHz.
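For reference, the textbook definitions are:

DE = P_out / P_DC
PAE = (P_out − P_in) / P_DC

Drain efficiency ignores what it costs to generate the RF drive, while power-added efficiency does not, so the power burned in the driver shows up mainly in the PAE curves; a driver that consumes less power narrows the gap. That is why the two topologies overlap in DE yet separate in PAE. (This is a simplified reading; the paper's exact accounting of driver DC and drive power may differ.)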

The Two-Stage GaN HEMT MMIC Design

The MMIC was built as a two-stage amplifier using Fraunhofer IAF's 0.25μm GaN HEMT technology on a multi-project wafer. The driver stage and the end stage have 0.488mm and 2.4mm total gate widths, respectively. The end-stage transistor and the driver transistor are integrated on a die with the AC-coupling capacitor and gate bias resistors, as shown in Figure 3.

The In-Package Quasi Load-Insensitive Class E Load Network

The end stage of the MMIC is terminated with a QLI Class E load network, to ensure high efficiency despite a wide variation in output powers caused by load modulation.
The MMIC and its QLI Class-E load network were assembled in a SOT1112A standard Ampleon air-cavity ceramic package, and used bond-wire and package lead capacitances to produce two key reactive elements: L1 at 4.9nH and C1 at 1.5pF.


Figure 3. A schematic of the assembled MMIC and its load network.

Figure 4 shows the efficiency of the packaged MMIC under load-pull measurements, demonstrating that the packaged MMIC can sustain its high efficiency under a large load variation.


Figure 4. Drain efficiency and power added efficiency of the low voltage driver MMIC during pulsed load-pull measurements at 2.14 GHz.

Building a Demo Board and Measuring it with a Modulated Signal

To further prove the value of the low-voltage driver approach Ampleon designed a PCB to mount the MMIC on, and tuned its output load to match the impedance at which the MMIC alone achieved its maximum efficiency under the load-pull measurements referenced above.
The PCB was prepared using Rogers RO4350B as the substrate and is shown in Figure 5 with its biasing and matching components.


Figure 5. Components and their values on the PCB, with inset, the mounted PA.

Figure 6 shows the drain efficiency, power-added efficiency and gain of the mounted PA, measured with a 2.14GHz continuous-wave signal. The peak PAE is 76%. The driver power consumption is so low that the difference between the drain and the power-added efficiency is negligible at low and high output power levels. The measured small signal gain is around 27dB at 2.14GHz.


Figure 6. Measured gain, drain efficiency (black line) and power added efficiency (red squares) of the final circuit. 

Conclusion

Using low-voltage driver circuits can help achieve high overall PA efficiency in 0.25μm GaN HEMT technology. Our measurements show that a low-voltage driver MMIC, assembled in an RF package with a QLI Class-E load network, can create a PA whose efficiency remains greater than 70% in the presence of an 8dB output power variation. This makes the MMIC a good candidate for use in PA architectures that rely upon load modulation, such as the Doherty and Outphasing approaches.
This article was co-written by Mustafa Acar, Osman Ceylan, Felicia Kiebler, Sergio Pires, and Stephan Maroldt.

Teardown Tuesday: Samsung’s SmartThings Smart Home Hub and IoT Water Leak Sensor

In this teardown, we open up Samsung's SmartThings Hub and a Water Leak Sensor to see what we can find.
Samsung's SmartThings is a line of smart home products designed to work together. The SmartThings Hub is a device intended to connect various IoT sensors for home monitoring, security, etc.
Between the professional packaging material, the manner in which it was packaged, and all the other goodies included in the Hub's shipping box, I'm confident that this is another well-built, high-quality device from Samsung.


The SmartThings Hub arrived in a very professional looking package... not a flimsy generic box.


All the goodies that came in the Hub's box.

This SmartThings Hub is designed to serve as the brain for all your smart home needs and devices. And when using the SmartThings app, or when connecting the hub to Amazon's Alexa, you can teach your house how to react when you're awake, asleep, away from home (on vacation, for example!), or arriving back home. Some of the advertised specifications of this magical hub include:

Power:
  • In-wall power adapter (100-240VAC to 5VDC 2 A)
  • 10 hours of backup power via four AA batteries
Communications:
  • ZigBee
  • Z-Wave
  • Bluetooth
  • IP-accessible devices
Compatible smart device brands:
  • Honeywell
  • Philips Hue
  • Kwikset
Now on to the teardown!

External Design

There are only two exceptions to the device's clean design: the two USB ports. Although they appear to be connected to the ARM processor—meaning they aren't simply "dumb" power ports—the Quick-Start Guide doesn't address their intended purpose.
I imagine that not too many people would use this SmartThings Hub as a USB hub. It might make sense to simply remove these two ports from the design in an effort to make this device a cleaner and simpler hub.


The very clean and simple exterior design.

Designed for Easy Disassembly

The hub's bottom side easily slides out to expose the battery compartment, and the battery compartment's plastic housing is almost as easily removed by unscrewing five screws. While only four screws are clearly visible in the image below, the fifth screw is covered by the top label.


The battery compartment is easily accessed.

Once the five screws are removed, the hub's sole PCB is exposed—no additional screws are used for securing the PCB in its place. So in about 5 minutes, the PCB can be easily and completely removed from the hub's enclosure. Nice!


The PCB can be easily and quickly removed from the enclosure.

The Hub's PCB... A Simple Yet Powerful Design

PCB Top Side

Although Samsung did an admirable job in keeping the PCB design and layout rather straightforward, there's actually a lot going on here:
  • Multiple wireless circuits/modules and antennas:
    • ZigBee           
    • Bluetooth
    • Z-Wave
  • Four voltage regulators:
    • Three of the onboard generated voltages are courtesy of the 3-channel voltage regulator.
    • The remaining voltage regulator is an LDO.
  • MOSFETs
  • Main processor (the brain!)
  • Memory (DRAM and eMMC)
  • EMI countermeasures
The PCB is obviously a multi-layered board; the top and bottom layers, which are ground planes, are stitched together using vias, which also serve as one of the aforementioned EMI countermeasures. These closely-placed ground vias—which lie on the circumference of the blue-colored portion of the PCB—together with the top and bottom ground planes, mimic a Faraday cage.


The Hub's PCB top side. Click to enlarge.

  • ZigBee SoC: Part marking: EM3587
  • ZigBee front-end module: Part marking: SiGe 2432L
  • eMMC: Part marking: KLM4G1FEPD
  • Ethernet transceiver: Part marking: KSZ8081
  • USB voltage level shifter (assumed): Part marking: X42 (no datasheet could be found)
  • USB power switch: Part marking: AP2511
  • N-Channel MOSFET: Part marking: 72K
  • Schottky diode: Part marking: B2L1
  • Transistor: Part marking: 1NT
  • Voltage regulator (3-channel): Part marking: RT7273
  • BLE SoC: Part marking: N51822
  • Voltage regulator (LDO): Part marking: SiPex 29302T5
  • DRAM: Part marking: K4B4G1646D-BCK0
  • Transistor: Part marking: K46
  • Processor (ARM): Part marking: MCIMX6L2DVN10AB

PCB Back Side

The PCB's back side consists of the various connectors, the battery contacts, a variety of passive components, and the Z-Wave module. Other than these components, this side of the PCB is rather vacant.


The Hub's PCB back side. Click to enlarge.


The image below shows the Z-Wave IC that lives underneath the EMI shielding can (another EMI countermeasure).


The Z-Wave Module with its EMI shielding can removed.

  • Z-Wave SoC: SD3503A-CNE3
  • Voltage regulator (assumed): Part marking: S8US1871 (no datasheet could be found)

On to the Water Leak Sensor

The SmartThings Water Leak Sensor comes in a professional-looking, clean, small—yet sturdy—enclosure. For quick access, it has no screws holding it together; it can easily be popped-open after depressing the tabs on its sides.


The Water Leak Sensor

After inspecting the PCB's ICs, I was surprised to see a temperature sensor. I wouldn't have imagined that this water leak sensor requires temperature readings.
As for the circuitry used for detecting water, the exact design approach escapes me. A few resistors are in series with the water-detecting contacts, and some transistors/MOSFETs are also used in the design. But exactly how it all works together, as a system, is not entirely clear to me. I imagine the resistance is being measured between the two water-detecting contacts, but this is just a guess.


The Water Leak Sensor's components

  • ZigBee SoC: Part marking: EM3585
  • ZigBee front-end module: Part marking: SiGe 2432L
  • Temperature sensor: Part marking: Si705
  • BJT transistors/MOSFETs: Part markings:
    • 1AM
    • 2AR (no datasheet could be found)

Conclusion

Both the SmartThings Hub and the Water Leak Sensor look impressive, both in their advertised functionality and in their visibly simple, yet powerful, designs.

Have you had a chance to use either of these devices? Can you shed some light on the water sensor design? If so, let us know in the comments below.

Teardown Tuesday: Depstech HD Wi-Fi Inspection Camera/Endoscope

In this teardown, we cut into a Depstech endoscope to see what we can find.

First Impressions

Before beginning the teardown of Depstech's HD Wi-Fi Inspection Camera, I tested it (played with it) a bit to see how well it worked. In a word: Wow!
This is an amazing piece of inexpensive technology. For less than $40 (at the time of writing this article), this is an easy-to-use and waterproof Wi-Fi endoscope that works seemingly flawlessly. Connecting it to my iPhone 5s was a snap as the free app looks to be bug-free. Take this information with a grain of salt because I only tested the camera for about 5 minutes before tearing it open, so by no means did I put this camera (and its app) through the wringer of exhaustive testing.
The video link below shows my very brief testing of this endoscope, including a water test. Note that the camera's video footage can be seen on my iPhone's screen. During the mere 5 minutes that I tested the camera, I did notice—as called out in the user manual—that it got a little warm/hot due to the LEDs being set to their maximum setting.


The image below shows everything that was included in the shipping box.


Everything included in the packaging box.

Specifications

The following specs were taken from Depstech's website:
  • Resolution: 1280×720, 640×480, 320×240
  • Camera diameter: 8.4mm
  • Waterproof: IP67
  • Wi-Fi transmission distance: 15 meters
  • Battery capacity: 600mAh
  • Focal distance: 3 cm to 6 cm
  • Power supply: DC 5V

The images below show various views of the Wi-Fi box as well as the camera accessories.


Multiple views of the Wi-Fi box, including its label.


Accessory pieces. Image courtesy of Depstech.com.

Okay, enough playin' around...let's tear this thing open!!

Opening the Wi-Fi Box

I began the teardown process by opening the Wi-Fi box. This plastic box is held together with four small Phillips-head screws. Once the lid is removed, a rather small PCB becomes visible, as well as the rechargeable lithium-ion battery (see image below).


Inside the Wi-Fi box is a single PCB and the rechargeable lithium-ion battery.

Although the functionality of the camera seemed to work perfectly, the assembly quality of this box seems to be less than ideal. In the image above, note the missing screw and the low-quality tape attached to the battery (presumably to prevent it from flopping around). Also notice the standoffs that are cracked or broken.

The PCB Inside the Wi-Fi Box

PCB "Front" Side

The image below shows one side (let's call this side the front side) of the PCB. And although the ICs are clearly part marked, it was not always possible to find their datasheets.
 

Major components on the PCB's front side.

The list below calls out the major electrical components and their part markings; included, when possible, are links to datasheets or additional information.
  • Crystal: 40.0MHz. Part marking: 40.0 SJJ.
  • Voltage regulators:
    • Part marking: A17K (qty 2)
    • Part marking: B6287
  • Schottky rectifier. Part marking: SS14
  • Processor/Wi-Fi controller: Ralink RT5350
  • Regulator (either voltage or constant-current): Part marking: S2QA

One final observation when viewing this side of the PCB: the hot glue, applied to the five wires that are soldered to the PCB, serves as strain relief for those wires. Although this approach is better than having no strain relief at all, and it's most likely a very inexpensive solution, this method is not ideal.

PCB "Back" Side

Let's take a look at the PCB's back side. Again, the major components have been noted in the image below:


Major components on the PCB's back side.

The components found on the back side of the PCB include:

As can be seen in the image below, the lithium-ion battery has protection circuitry attached to a small PCB, which is soldered to the leads of the battery.


The lithium-ion battery's protection circuitry on its small PCB.

The components on this PCB include:
  • Unknown IC (perhaps a voltage regulator): Part marking: G3JU
  • Dual FET: Part marking: 8205A

The Camera Probe

Let's now turn our attention to the camera probe. As noted in the user manual, this camera has an IP67 rating, which means that it can withstand some water. To quote the user manual: "The camera probe is only IP67 waterproof which means it can only do underwater inspections for no deep than 1meter and no more than several minutes." Kudos for being IP67 rated. Unfortunately, this means that the probe was rather difficult to open-up for inspection.
The camera probe housing is made from metal. After observing that it had multiple pieces screwed together, I attempted—and failed—to unscrew these pieces. The video link below shows my futile efforts of using vise-grips to unscrew the two pieces. Regrettably, this approach only marred the metal enclosure.


My next attempt to see inside the camera probe—and to penetrate its IP67 protections—was to fire up the Dremel. For the record, I decided to use this mini grinder only as a last resort. Using a Dremel will most definitely result in the destruction of a device, and I always feel a bit guilty when destroying a perfectly working device... especially one as cool as this wireless endoscope.
The video below shows me cutting (literally) into the camera's probe. Although you can't see my head in the video, I am wearing hearing protection and a safety face shield.


Success! ...sort of.
Whenever cutting into an enclosure, one always runs the risk of unintentionally damaging unseen components. This was the case here, and, of course, the main IC inside the enclosure is what was damaged. As can be seen in the image below, I managed to grind off the image sensor processor's part marking.
Nonetheless, the PCB—actually it's a flex circuit—that lives inside the camera probe's enclosure looks mighty impressive, yet simple.


Camera probe's PCB (front side)—it's a flex circuit.

The back side of the PCB contains two voltage regulators and some potting/epoxy:
  • LDO voltage regulators:


Camera probe's PCB (back side).

Also integrated on the flex circuit is an LED ring consisting of six LEDs. This ring is attached to the camera's lens, which simply screws into the camera's module housing (see image below).


Camera lens and LEDs.

When looking down/inside the camera module, the camera's image sensor can be seen. This sensor is also attached/soldered to the flex circuit. The image below shows this sensor through a microscope.


Camera's image sensor mounted on the flex circuit.

Unfolding the Flex Circuit

The final teardown step was the unfolding of the flex circuit. After desoldering the wires and cutting off the glued-on plastic image sensor housing, it was quite easy to simply unfold the PCB/flex circuit.


The flex circuit unfolded.

The metal pieces on the back side (which are glued on) serve as stiffeners for this flex circuit.

Conclusion

As I mentioned at the beginning of the article, this Wi-Fi HD endoscope is quite impressive. The observed low-quality assembly of the Wi-Fi box can be easily overlooked when considering the high-quality factors of this device, namely, the IP67 waterproof rating and the integrated flex circuit that comprises the LED lighting ring, the image sensor processor, and the image sensor itself.

Teardown Tuesday: Samsung Wireless Charger (EP-PG9201)

In this teardown, we tear open Samsung's wireless charger to examine its internal components.

Visual Inspection

Samsung's wireless charging pad (model EP-PG9201)—which utilizes a technology commonly known as wireless power transfer—is advertised as being Qi (pronounced "chee") certified, allowing it to charge compatible Galaxy smartphones and other Qi-compatible devices.
This solidly-built charger is rather lightweight with a very nice style and an excellent fit-and-finish feel—I would expect nothing less from Samsung.


Samsung's Wireless Charger (EP-PG9201). Image courtesy of Amazon.

Oddly, I'm unable to find any specification which states how much power this pad is capable of transferring. It can be assumed, however, that the device will transfer no more than 9 W—this value is based on the label's input specification of 5.0 VDC at 2.0 A and assumes a system efficiency of 90%.


Label on bottom-side stating the input voltage and current specifications.

No External Screws

In an effort to save money on material and labor costs—or, perhaps, in a futile effort to prevent people (like me) from tearing open this device—Samsung designed this wireless charger without any external screws. I realized this fact only after removing the top- and bottom-side adhesive padding as well as the bottom-side label.


No external screws are used.

I used my fingers and hands to twist, pry, and pull, then applied heat with my trusty heat gun, followed by more prying and twisting—all with no luck. After that, I resorted to prying the enclosure open by using not one but two flathead screwdrivers—I actually put on my safety glasses because I didn't want to accidentally poke myself in the eye with the screwdrivers.
This charger seemed to be nearly impenetrable short of using a hammer or a Dremel, which I seriously considered using. Only after separating the two pieces did I see Samsung's concealed attaching mechanism: the top and bottom pieces twist-and-lock together. However, given the right tool (say, a very thin file), the locking mechanism can be released, which would then allow the two pieces to be untwisted, though probably still with some effort (see images below).


Four tabs and slots are used to secure the two pieces together with a twisting action.


The locking mechanism. The tab on the left can be depressed using the right tool, thus allowing the device to be unlocked. The enclosure can then be opened by untwisting the top and bottom pieces.

Internal Electrical Parts and Assemblies

This charging pad performs its wireless power transfer magic using a single PCB and one coil—albeit a rather large coil—which is typical for wireless charging devices. The image below shows how the PCB and the coil are attached to each other with a single plastic disc separating them.


The internal electrical subassemblies consist of a single PCB and a large coil.

Examining and Dissecting the PCB

The PCB is impressively simple, at least when referring to the few electrical components that it uses (see image below). And although the ICs are clearly marked, I was unable to locate datasheets for the voltage regulator and the "current flow devices," which I assume to be either high-speed diodes or FETs.


The major electrical components identified.

  • Voltage regulator: IC marking: 9519 H451
  • Current flow devices (qty 4): IC marking: AIW 7JAB

Removing the EMI Shielding Can

Since no processors were found anywhere else on the PCB, it makes sense that the IC underneath the shielding can (an EMI countermeasure) is the brain of this wireless charging system.
The IC that is located underneath the shielding can is clearly shown in the image below. This IC, with part marking P9235, is a transmitter controller made by IDT for wireless power transfer applications of less than 3 W.

IC underneath the shielding can is the brain of the system.

Conclusion

Samsung's new wireless charging pad (the EP-PG9201) looks to be a simple yet well-designed and well-built wireless power transfer device. I personally don't own wireless power transfer devices (either transmitters or receivers), so I can't comment on how well this device works. I can say, however, that based on the maximum power transfer specifications (3W) of the IDT controller together with the listed input voltage and current specifications of this device (5V and 2A), this system looks to be quite inefficient (30%). Perhaps this low efficiency is on par with other similar wireless power transfer systems.
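For reference, the back-of-the-envelope math behind that estimate:

P_in(max) = 5 V × 2 A = 10 W
Efficiency ≈ 3 W / 10 W = 30%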




If you have experience with designing or analyzing wireless power devices, please share your experiences in the comments below.

Altium Releases Designer 18 PCB Design Software

Just in time for the new year, Altium LLC has announced the newest release of the company’s flagship PCB design software—Altium Designer 18.

What differentiates this release from its predecessors are several improvements in performance, user interface, and design tools to make the process of creating, managing, and manufacturing complex PCB designs as seamless as possible. To identify areas of improvement, the company used feedback from the Altium user community as well as its own research and development efforts.
Here is a feature overview of Altium Designer 18.

Ability to Handle Larger Designs

Altium Designer 18, unlike its predecessors, now takes advantage of 64-bit architecture with improved code for multi-threaded execution. This gives Altium Designer greater access to computer memory to handle large designs and better algorithm execution to make common tasks faster and more efficient. Things like generating Gerber files, design rule checking, and switching from 2D to 3D should feel quicker and more efficient as a result. This also helps those designing multiboard projects, which can eat up a lot of memory.
Between Altium Designer 16 and Altium Designer 18, the following performance benchmarks were provided on a 4-layered project with 39.6k tracks, 1925 components, 1267 nets, and 369 polygons:
  • Polygon Repour - 5.0x faster (12:56 mins vs 2:36 mins)
  • Gerber File Generation - 155.6x faster (2:33 hours vs 0:59 minutes)
  • Online DRC - 5.6x faster (32:30 mins vs 5:46 mins)
  • File Opening with Scene Building - 6.7x faster (9:36 mins vs 1:26 mins)
  • Project Compilation Time - 3.38x faster (0:54 mins vs 0:16 mins)
Speeding up common tasks in Altium Designer 18 makes PCB designing a smoother experience, and can help get designs to manufacturers or clients faster.

Usability Changes for Menus

A demo of Altium Designer 18 featuring a new UI. Image courtesy of Altium LLC.

Menus in Altium Designer have been reconfigured to make the workflow smoother. Commands and menus deemed “low usage” were removed, though Altium hasn't said much about how it defined that term.
Added menus and commands were:
  • A new properties panel combines the Inspector Panel and the properties dialog, displaying important information about the PCB design in one place. 
  • Global search function for quick access to commands, information, or design objects (such as component libraries). 
  • A new layer and control panel also gives more control over layers and masks, and provides filtering options to make focusing on layers of interest easier. 
  • A new active bar provides a place where the most frequently used commands can be quickly accessed or customized to the user's needs.

Interconnected Multi-board Assembly, ActiveRoute, and PDN Analyzer

Altium Designer 18 also has a new project type: multi-board design project. Previously a feature only available in the most high-end tools, multi-board project management is now available to Altium users for high-density, multi-board projects. Designers can work on multiple boards in one environment, manage connections, synchronize pin swaps across connections, and flag errors in connections or dissimilar net names. Multi-board projects also provide designers the opportunity to mechanically model their designs, and check for component alignment and collisions. The outcome is more accurate prototyping, fewer iterations, and better design accuracy.

A 6-sided PCB multi-board project with collision detection. Image courtesy of Altium LLC

PCB routing can be a long and arduous task even for the most seasoned PCB designer; high-speed auto-routing can lessen the burden and help guide the user. With the new and improved ActiveRoute feature, automated routing can be fine-tuned and adjusted using rules-driven length and phase tuning, along with meander controls, glossing, and pin-swapping.

Length tuning in ActiveRoute. Image courtesy of Altium LLC

Altium Designer 18 also features the PDN (Power Distribution Network) Analyzer 2.0, with a more intuitive and appealing user interface, more powerful features, and improved accuracy. PDN Analyzer 2.0 can analyze multiple power nets concurrently, perform current and voltage limit checks, and provide detailed reports.

Streamlining the Bill of Materials

Finally, one of the last steps of PCB design is the creation of the bill of materials (BOM). The ActiveBOM feature in Altium Designer 18 connects to information from vendors to provide real-time information on component availability and price so that information can be accessed throughout the design process before final decision making at the end.
Several user requests have also been implemented to make creating the BOM better—among those improvements are persistent Item/Line numbering, and aliasing for parameters/column names.

ActiveBOM screenshot. Image courtesy of Altium LLC.
(Click to enlarge)



Altium Designer 18 is available as a free upgrade to already existing Altium subscribers. Otherwise, a free trial is available through the website to give you a taste before buying it.
If you've had a chance to work with Altium Designer 18, please share your experiences in the comments below.


Featured image courtesy of Altium LLC

Possible IC Packaging Shortages in 2018

Experts in semiconductor manufacturing are predicting longer lead times and higher demand for IC packaging in 2018.

This news comes during a time when the electronics industry is already facing several shortages and high volume demands for components such as OLED displays, DRAM, and NAND flash memory.
This shortage is being caused by a perfect storm of different factors, among which are poorly-forecasted demands for 2017, new high-demand industries, and longer wait times on raw materials. These factors go all the way down the supply chain, making for a complex issue to solve.
Currently, major vendors of IC packaging are experiencing high-to-full capacity production volumes, just barely keeping up. These vendors have experienced higher demand for their products than anticipated—instead of just a 3% growth, there was a 7% increase in IC packaging demand in 2017. Interestingly, the first two quarters of 2017 were as expected, but the last half of the year has seen a surge in demand.
Experts say that there appears to be a slight dip coming up in demand, and they will have to wait and see what happens in Q1 2018 to know whether this is a passing phase or a genuine break in the trend.

Not All Packaging Is Created Equal

Not all IC packaging types are being affected. However, the more common and desirable ones have naturally been most affected, particularly the packaging types that rely on 200mm wafer bumping. These include:
  • Chip Scale Packaging (CSP)
  • RF Front-End Modules (RF FEM)

Fan-In Chip Scale Packaging. Image courtesy of STATS ChipPAC.

Unrelated to 200mm wafer bumping capacity, other IC packaging types facing shortages include:
  • Quad-Flat No-lead (QFN)
  • Wafer Level Packaging (WLP)

 
Top: QFN Packaging (via Wikipedia), Bottom: Wafer Level Packaging (image courtesy of © Raimond Spekking)

Further compounding wait times is the fact that the lead frames used in QFN packages are also in short supply. Fewer lead frame manufacturers are available, since many have turned to manufacturing connectors or other electrical components that provide a higher margin on increasingly hard-to-obtain copper alloys. The wait time on lead frames has increased from 3 to 4 weeks to as long as 10 to 12 weeks.
Closer to the end of the chain, manufacturers are also experiencing longer wait times on manufacturing equipment, which could help increase capacity. Wirebonders, for example, have had lead times as long as five weeks (a two-week increase from normal).
The rising demand for these specific IC packaging types can be attributed to RF front-end devices in mobile phones, automotive components, and an increase in IoT applications—all three being areas that are expected to continue to grow in the coming years.

How Does this Impact Designers and Consumers?

If the IC packaging demand slows down, or if manufacturing capacity can keep up, then a short period of waiting might not be earth-shattering. It still isn't the best news for smaller businesses or designers, who may lose money from having to wait a few extra weeks for the components they need.
However, if longer-lasting shortages continue, it's hard to predict what kind of outcome that could have. With the NAND flash memory shortage, there were concerns that companies trying to make up for the shortage would overshoot and saturate the market. So far, that hasn't occurred.
A shortage lasting a year or longer might start financially hurting some businesses or small companies. Longer shortages could force designers to start looking at alternative components and materials.
Have you experienced fallout from component shortages in 2017? Share your experiences in the comments below.


Featured image color-adjusted and resized. Courtesy of © Raimond Spekking.

Q# Is for Quantum Computing: A New Programming Language from Microsoft

Microsoft recently released a preview of a new programming language that will be used specifically for quantum computing programming: Q# (pronounced ‘Q-sharp’).
The company’s goal is to eventually create a full software stack that will give interested developers a chance to learn about quantum computing programming before the technology becomes more readily available.
Built from the ground up to support quantum computing programming, Q# is a high-level programming language meant for writing scripts that execute their sub-programs on a quantum processor linked to a classical host computer, which receives the results. This is not unlike hybrid computing architectures such as CPU plus GPU, or CPU plus FPGA.
Developers using the language need not have in-depth knowledge of quantum physics. For the interested, Microsoft does provide a primer on essential quantum computing concepts, covering vector and matrix mathematics, the qubit, Dirac notation, Pauli measurements, and quantum circuits.
The Q# development kit is available for free, with detailed installation instructions and introductory programming tutorials. Q# programs run on a quantum simulator within Visual Studio that simulates a 32-qubit quantum processor; the Azure edition of the simulator can simulate up to 40 qubits.
Microsoft expects that a quantum computing stack will contain several different layers of software and hardware, all operating at different temperatures. For example, cryogenic processors or FPGAs are likely going to be required to handle error correction in quantum computers, and a classical host computer will also work in tandem with the quantum computer since qubits are not stable.
Q# is meant to abstract the management of all these layers away from the developer, so that the focus can remain on algorithm development and problem solving, using a language that looks familiar.

What Does Q# Look Like?

At first blush, the Q# programming language looks not unlike most other programming languages, and is very similar to its C# counterpart.
The very first tutorial provided by Microsoft involves creating a Q# Bell State script—the four entangled states of two qubits. The end result leads to observing entanglement in two measured bits in the output of the program. A later tutorial walks the user through writing a script to simulate quantum teleportation. Microsoft hopes that introducing such a novel concept to would-be developers may pique interest in the language and quantum computing.
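For reference, the four Bell states (the maximally entangled two-qubit states the tutorial refers to) are:

|Φ+⟩ = (|00⟩ + |11⟩)/√2
|Φ−⟩ = (|00⟩ − |11⟩)/√2
|Ψ+⟩ = (|01⟩ + |10⟩)/√2
|Ψ−⟩ = (|01⟩ − |10⟩)/√2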
Q# has a few interesting primitive types. In addition to the more typical ones such as int, double, bool, and string, there are also Pauli, Range, Result, and Qubit types.
There are also many Q# quirks in the language, including functions being referred to as operations, and so on.

Quantum Circuit for Teleportation. Image courtesy of Microsoft.

operation Teleport(msg : Qubit, there : Qubit) : () {
    body {
        using (register = Qubit[1]) {
            let here = register[0];
            H(here);
            CNOT(here, there);
            CNOT(msg, here);
            H(msg);
            // Measure out the entanglement.
            if (M(msg) == One) { Z(there); }
            if (M(here) == One) { X(there); }
        }
    }
}
Teleportation.qs script from the Q# tutorial. Tutorial available here.

For the more algorithmically inclined, it might be worth checking out the Quantum Algorithm Zoo for ideas on how to play with Q#.

Quantum Computing for Solving Hard Problems

Quantum computing is expected to disrupt many industries and fields once it becomes available and ubiquitous. Many encryption methods being used today will no longer be effective against quantum computing, including RSA.
However, quantum computing will also help us solve pretty complex problems. It will even solve the encryption problem it initially undoes, since quantum encryption will be, as far as we are concerned, completely secure.
It will also become possible to model chemical and protein interaction for drug design and could open the door for individual drug design therapy, where drugs are developed based on an individual’s genetics. Or help us address climate change through weather and climate prediction modeling. We’ll be that much closer to successfully modeling the human brain, creating much more capable artificial intelligence, and basically making a leap in every major tech domain.


For now, we can prepare ourselves by becoming acquainted with Q# and being ready for when we can start putting our quantum algorithms to work.

Feature image courtesy of Microsoft.

Wireless Charging: A New Wireless Battery Charger Transmitter from STMicroelectronics



A new wireless battery charging transmitter from STMicroelectronics tries to raise the bar with faster charging times and higher efficiencies.
Wireless charging is a growing trend. Although we may see many more wireless charging applications in the future, smartphones, tablets, and wearables seem to be the most popular and common today. All wireless power transfer systems, also known as wireless charging systems, require a transmitter and a receiver. Wireless-charging protocols make it possible for wireless power transfer to be governed by communication between the transmitter and receiver.
STMicroelectronics, known simply as ST, has released the STWBC-EP, a high-efficiency wireless power transmitting IC. The STWBC-EP transmitter is able to control the amount of energy transferred to the receiver by modulating the duty cycle, amplitude, or frequency of the transmission. This IC is a single-coil transmitter controller optimized for applications requiring up to 15W of power. And by generating the correct amount of power, the highest levels of end-to-end efficiency are achieved, even during light load conditions. Also, this IC is able to charge devices up to three times faster due to its "stepwise increase to higher power levels," according to the STWBC-EP flyer.


Figure 1. A typical wireless charging system; the STWBC-EP is the transmitting controller. Image taken from the datasheet.

A Reference Design Is Available

If you're new to the wireless charging arena, or if you would like a little extra help with your designs, ST offers the STEVAL-ISB044V1, which is their wireless charger transmitter evaluation kit/reference design that uses the STWBC-EP transmitter controller.


Figure 2. ST's wireless charging reference design (STEVAL-ISB044V1) uses the STWBC-EP transmitting controller. Image courtesy of the reference design's user guide.

Supports USB Vin and Provides 15W, with a Caveat

As advertised, this IC is capable of providing up to 15W of power to a receiver load and also supports USB voltage inputs. However, as stated in section 5.5 (Input power supply management: VMAIN, QC_IO), 15W is only available when the USB wall adaptor is capable of providing 12V when requested to do so by the STWBC-EP. Unfortunately, the maximum power available when using USB 5V or 3.3V is not listed.


Figure 3. Package information, from the datasheet.

Requires Filtering

According to section 5.1 (Power supplies: VDD, VDDA, VSS, VSSA, VOUT) of the datasheet, "VDD and VDDA should be correctly filtered to allow the correct operation of the device." Unfortunately, the datasheet offers no explanation for how to correctly filter these pins. Additionally, section 5.2 (DC/DC converter: ...) calls for a "second order passive filter" for properly generating DCDC_DAC_REF, and a "filtered current sensor" must be connected, as noted in section 5.4 (Wireless power functions: ...), to the ISENSE pin, but again no additional information is provided. It's all very strange that no guidance on the filtering requirements is available.
Since this is, after all, a new device, maybe ST is still working out the filtering details and will update their datasheet when all the wrinkles have been ironed out. Perhaps the proper filtering schemes can be gleaned from the reference design, but this approach would be inefficient and time-consuming. We'll make a note in this article if we become aware of any such updates.
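As a purely textbook note (not ST's recommendation), a second-order passive filter is usually two cascaded RC sections, each contributing a pole near

f = 1 / (2πRC)

which holds exactly when the sections are buffered or the second section's impedance is much larger than the first's.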

Not Tested in Production

If you're planning to use this part in a new design, it might be worth your while to call ST to inquire about the items listed as "Data based on characterization result, not tested in production." See image below. This note is not uncommon; I have observed similar notes on other recently released semiconductor products. What exactly does this note mean, especially considering that it is attached to numerous specifications given in the  STWBC-EP datasheet? Can we trust these specs, or not?
In any event, I imagine that ST will be happy to keep you updated on their production testing results if you contact them and request whatever new information they have.


Figure 4. An example of "not tested in production" specs, from the datasheet.


Have you had a chance to use or test this new wireless battery charger transmitter or its reference design/evaluation kit? If so, leave a comment and tell us about your experiences.

The FitByte: How to Make an ATtiny85 Powered Activity Tracking Wearable

In this project, we'll build a prototype of a wearable fitness device designed to vibrate when it detects inactivity. This device is low-cost and can help keep you on the move.


I love data, especially when you can use it to improve your life. But sometimes less is more. Activity trackers are great at helping you set goals and track improvement, but the only feature that I personally need is a reminder to be more active. And who wants to walk around with a dorky band on your wrist when you can be decked out in PCBs that vibrate when you're lazy?
Several times a week, I will look down and realize that I have been frozen at my desk for hours. This is what inspired the FitByte, a simple activity tracker that will notify you when you are inactive for a pre-set period of time. This is a simple build with a bit of through-hole soldering. The finished product will fit on your wrist, is a great conversation starter, and helps promote that healthy lifestyle we all pretend to be living.

Bill of Materials


Schematic

The heart of this project is the ATtiny85, which, although small in size, packs enough punch for this project. This microcontroller can be programmed with the Arduino IDE and is easy to fit into projects to keep cost and size down. With three analog inputs and two PWM outputs, the ATtiny85 has just enough I/O for this project.
For our activity sensing needs, I am using the MMA7341LC 3-axis accelerometer, which outputs each axis on a different analog line. This accelerometer also has a sleep mode that can be activated by the microcontroller to improve battery life. Our activity reminders will come through a disc vibration motor which, despite its small size, is powerful enough to be felt without drawing attention to your lethargic lifestyle. I found a powerful enough vibration by directly driving the motor from the ATtiny, but a small transistor could be added to improve vibration performance.
Everything can be wired up according to the following wiring diagram:



Please Note:
  1. Capacitors C1, C2, C3, C4, and resistors R1 and R2 are contained in the MMA7341LC breakout board used for this project. 
  2. The output impedance of the accelerometer is 32 kΩ and the signals connected to the ADC should have output impedance less than 10 kΩ; thus, it would be better to buffer the analog signals prior to analog-to-digital conversion.

Wiring

A big part of this project is the form factor. By keeping the components to a minimum the project can fit on two 1” square proto boards. These boards are small enough to fit fairly well on the average male wrist (assuming that my wrist is representative of the average male).  One board will be used for the battery holder and the other will be used for the remainder of the components. These two boards will be joined with a few jumper wires, allowing the boards to flex in the middle and better contour to the wrist.
Initially, I thought of putting the battery holder on the back of the protoboard and the components on the front. However, this felt too tall to wear comfortably. I’d recommend trying a few variations of component layouts before actually heating your soldering iron, just to ensure the best end result.
All the selected components are through-hole except for the CR2032 battery holder. The battery holder is surface mount, but it was pretty easy to solder onto the protoboard. I chose to use a surface-mount battery holder to keep the profile low and because it looks much cooler than the through-hole variant.
To set up the power board, first, solder a wire to the middle of the board. This solder blob should be as small as possible to reduce the amount of pressure on the other solder joints and will make the negative connection to our battery. The other solder joints can be made where shown below. These solder pads are used to attach the surface mount battery connector positive terminals.



The battery holder is then placed on these solder blobs, and heat is applied from the top of the battery holder tabs until the pre-applied solder reflows around them. This is relatively easy to do, but it requires a bit of patience. Be careful not to touch the battery holder while soldering it down; it is an excellent heat sink and gets hot quickly!



The other board holds the fun part of the project: the ATtiny85 microcontroller, MMA7341LC accelerometer, vibration motor, and power switch. I found that the layout below worked well. I left a row of solder holes free on the right-hand side for attaching the strap. You could attach a traditional watch strap to this project, but I thought it would be fun to solder my own together from some common electrical components.



Most vibration motors have an adhesive backing, so installation is a breeze. Measure twice, pull off the sticker once, and away you go. I used the power and ground wires from the power board to attach the two protoboards together, which also allows the activity tracker to flex in the middle and better conform to the wrist. Additional jumper wires can be used to ensure that there isn't undue stress on the power cables.



This is how the project should look after being completely soldered together. It’s important to note that some protoboards have traces connecting adjacent solder pads. These are easily cut with a sharp knife but can lead to severe headaches if missed.



To make life easier, I checked my circuit as I soldered to ensure everything was wired up correctly. At this point, the activity tracker is complete. If you are going to be carrying the activity tracker in your pocket or strapped to a bag, it is ready to use.

Bonus Step: Make a Strap

I am planning on wearing mine as a more “traditional” activity band, so I decided to make a suitable strap.
I purchased a ribbon cable to use as a strap. This can be soldered onto the previously unused pads on either edge of the protoboard.



After measuring the strap against my wrist, I soldered a row of stackable headers to each end of the strap as a way to connect the two ends.



Be sure to measure carefully before you trim the wires so that the strap fits. If you made it a bit short, or if multiple people will be wearing the activity tracker, a set of jumper wires can be used to extend the strap.

Program Flow

The idea behind the program is to notify the wearer when a predefined inactivity timer runs out. The program reads the accelerometer output signals, compares them to a threshold, and resets the timer whenever the threshold is exceeded. Below is a brief snippet of the code:

const long maxAtRestMinutes = 15;
const long maxAtRestSeconds = (maxAtRestMinutes * 60); // the longest the user can be at rest (seconds)

int accelCenter = 1024 / 2;                            // 0 g reads as Vdd/2 at the ADC
int thresholdHigh = 650;                               // determined by experiment to correspond to walking
int thresholdLow = (2 * accelCenter) - thresholdHigh;  // thresholdHigh and thresholdLow are centered around 0 g

void loop() {
  if (activityTimer > maxAtRestSeconds)                // timer expired: remind the wearer to get active
  {
    vibMotor();
    activityTimer = 0;
  }

  if (xVal < thresholdLow || xVal > thresholdHigh ||
      yVal < thresholdLow || yVal > thresholdHigh ||
      zVal < thresholdLow || zVal > thresholdHigh)
  {
    activityTimer = 0;                                 // movement detected: reset the inactivity timer
  }
}


This code worked well but is not very energy efficient. To improve battery life, I put the microcontroller and accelerometer to sleep whenever I'm not checking the current acceleration. Both the ATtiny85 and MMA7341LC have low-power modes that keep battery drain to a minimum. The ATtiny85 can be put to sleep for a predetermined amount of time, and the MMA7341LC falls asleep whenever Pin 7 is driven to logic low. This means everything can be kept in a low-power state unless the microcontroller is checking the acceleration data.
The program is asleep for the majority of the time but wakes up once every minute to monitor the accelerometer. While monitoring the accelerometer, the program checks the acceleration values once a second for 5 seconds.
The acceleration values are compared to a pre-set activity threshold. If they exceed this threshold, the activity timer is reset. When the activity timer expires, the vibration motor is activated to prompt the user to be more active.
The threshold values were determined by trial and error while performing various daily activities. For simplicity's sake, all acceleration values were compared against the same threshold. I think this is a bit simplistic and could be improved in the future.
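Below is a minimal sketch of how this sleep cycle could be implemented, assuming the illustrative pin assignments from earlier and the timing and threshold constants from the snippet above. It uses the ATtiny85's watchdog timer to wake from power-down roughly every 8 seconds and drives the accelerometer's sleep pin low between checks; treat it as a sketch of the approach, not the exact code running on my tracker.

#include <avr/sleep.h>
#include <avr/wdt.h>
#include <avr/interrupt.h>

// Pin and threshold names follow the earlier snippets and are assumptions for illustration.
const int X_PIN = A1, Y_PIN = A2, Z_PIN = A3;   // accelerometer axes
const int MOTOR_PIN = 0;                        // vibration motor
const int SLEEP_PIN = 1;                        // MMA7341LC sleep input (logic low = sleep)

const long maxAtRestSeconds = 15L * 60;         // longest allowed period of inactivity
const int  thresholdHigh = 650;                 // found experimentally (walking)
const int  thresholdLow  = 1024 - thresholdHigh;

long activityTimer = 0;                         // seconds since the last detected movement

ISR(WDT_vect) { }                               // empty ISR: its only job is to wake the CPU

void watchdogSleep8s() {
  MCUSR &= ~_BV(WDRF);                          // clear any pending watchdog reset flag
  WDTCR |= _BV(WDCE) | _BV(WDE);                // start the timed change sequence
  WDTCR  = _BV(WDIE) | _BV(WDP3) | _BV(WDP0);   // interrupt-only mode, ~8 s timeout
  ADCSRA &= ~_BV(ADEN);                         // ADC off while sleeping to save power
  set_sleep_mode(SLEEP_MODE_PWR_DOWN);
  sleep_enable();
  sei();
  sleep_cpu();                                  // sleep here until the watchdog fires
  sleep_disable();
  ADCSRA |= _BV(ADEN);                          // ADC back on for the next readings
}

void vibMotor() {
  digitalWrite(MOTOR_PIN, HIGH);                // a simple one-second buzz
  delay(1000);
  digitalWrite(MOTOR_PIN, LOW);
}

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  pinMode(SLEEP_PIN, OUTPUT);
  digitalWrite(SLEEP_PIN, LOW);                 // accelerometer starts asleep
}

void loop() {
  for (byte i = 0; i < 7; i++) {                // 7 x ~8 s: sleep for roughly a minute
    watchdogSleep8s();
  }
  activityTimer += 60;                          // credit the minute spent asleep

  digitalWrite(SLEEP_PIN, HIGH);                // wake the accelerometer
  delay(2);                                     // let its outputs settle
  for (byte i = 0; i < 5; i++) {                // check once a second for 5 seconds
    int xVal = analogRead(X_PIN);
    int yVal = analogRead(Y_PIN);
    int zVal = analogRead(Z_PIN);
    if (xVal < thresholdLow || xVal > thresholdHigh ||
        yVal < thresholdLow || yVal > thresholdHigh ||
        zVal < thresholdLow || zVal > thresholdHigh) {
      activityTimer = 0;                        // movement detected: reset the timer
    }
    delay(1000);
  }
  digitalWrite(SLEEP_PIN, LOW);                 // accelerometer back to sleep

  if (activityTimer > maxAtRestSeconds) {       // timer expired: remind the wearer
    vibMotor();
    activityTimer = 0;
  }
}

The watchdog's longest timeout is about 8 seconds, which is why the sketch chains seven sleeps to approximate the once-a-minute wake-up described above.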

 

Conclusion

I think the project turned out really well! It helps keep me honest about my activity levels—and has a certain charm. There are plenty of options for improving both the hardware and the software side of this project. Let me know in the comments below what you'd do to make this project your own!



