Channel: LEKULE

When a plane loses pressure, here's what happens to your body

121 passengers found out recently when a Jet Airways flight crew forgot to pressurize the cabin.



Most plane trips begin with a checklist. Socks? Check. Underwear? Check. Sweatshirt just in case it gets a little bit chilly one night? Check. As crucial as your undies are to a successful trip, though, a far more important checklist goes on while you’re complaining about how small the seats are getting. Flight crews do a checklist before the plane takes off to ensure they don’t forget to do something like, you know, pressurize the cabin.
But the crew on Jet Airways flight 9W 697 managed to miss that step recently on their way from Mumbai to Jaipur. The result? A plane-ful of panicking passengers, many of whom awoke from naps to discover intense pain in their ears, bleeding from their ears and noses, and a heck of a lot of confusion.
The airline itself hasn’t released much more than a vague statement, but Lalit Gupta, the deputy director general of the Directorate General of Civil Aviation told the Hindustan Times that “The 9W 697 Mumbai-Jaipur flight was turned back to Mumbai after take off as, during the climb, crew forgot to select switch to maintain cabin pressure.”

First things first: What does it mean to pressurize an airplane?

Air at higher altitudes is under less pressure and is therefore harder to inhale—the molecules of oxygen are literally farther apart. This is why when you visit a city like Denver, which is about a mile above sea level, you may notice that you tire more easily: you’re getting less oxygen to your muscles and brain.

Planes flying above 10,000 feet need to pressurize the cabin so that they can maintain a high enough oxygen level for everyone onboard to function, though they don’t actually pressurize it to sea-level pressures (it’s usually more like the 8,000 ft mark). A normally functioning plane, once sealed off at the gate, will automatically raise the pressure inside smoothly as the pressure outside drops, so that ideally you don’t notice it much. The same thing happens in reverse at the other end to bring everything back to normal.
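For a sense of scale, the pressure drop with altitude can be sketched with the standard barometric formula. This is a generic illustration using standard-atmosphere constants, not figures from the article:

```python
# Approximate air pressure vs. altitude using the ISA barometric formula
# (valid below roughly 36,000 ft / 11 km). Constants are the usual
# standard-atmosphere values.
def pressure_pa(altitude_ft: float) -> float:
    h_m = altitude_ft * 0.3048  # feet -> meters
    return 101325.0 * (1 - 2.25577e-5 * h_m) ** 5.25588

sea = pressure_pa(0)            # sea level
cabin = pressure_pa(8_000)      # typical pressurized-cabin equivalent
cruise = pressure_pa(35_000)    # outside air at typical cruise altitude

print(f"Sea level: {sea / 1000:.1f} kPa")
print(f"Cabin (~8,000 ft equivalent): {cabin / 1000:.1f} kPa")
print(f"Outside at 35,000 ft: {cruise / 1000:.1f} kPa")
```

At cruise altitude the outside air is under a quarter of sea-level pressure, which is why an unpressurized climb becomes dangerous so quickly.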

Why is an unpressurized plane dangerous?

This loss of pressure seems to have popped some of the small blood vessels in people’s noses and ears, or perhaps even ruptured some eardrums—none of that is unheard of for a depressurization event. Your body is really only designed to work within a small range of pressures pretty close to sea level, and when you go outside that zone delicate areas get damaged. Fluid and gas are both far more susceptible to pressure than solid flesh, so these bits go first. You actually experience a minor version of this in airplanes or even in fast-moving elevators: your ears pop. There’s a little membranous passage called the Eustachian tube inside your ear that rebalances the pressure between the atmosphere and your middle ear (there’s a little pocket of air in there). Changes in pressure can block the Eustachian tube, making a tiny, painful vacuum in that middle ear bubble. Chewing gum or even sucking on a hard candy alleviates that issue because the act of swallowing opens up your Eustachian tube.
All this is to say that your ears are very sensitive to pressure changes, so bleeding from the ears might not be all that surprising given the lack of pressure on this aircraft. Similarly, delicate blood vessels can rupture in the nose.

How does this even happen?

A lot of the time when a plane suddenly depressurizes, it’s because some kind of damage has occurred and the airtight seal keeping the pressure inside the aircraft is broken. Large aircraft are well equipped to handle this, because air masks drop and provide enough oxygen for the pilots to get the plane down to 10,000 feet or below, where the air is dense enough to keep everyone alive and functioning. It’s crucial that this drop happen immediately, because perhaps the most dangerous part of losing pressure—despite the dramatic explosions you’ve seen in the movies—is hypoxia.
Hypoxia, or a lack of oxygen, can be remarkably hard to recognize but also completely destroys your ability to function. And it happens really fast. The Federal Aviation Administration says that “The ability to take corrective and protective action is lost in 20 to 30 minutes at 18,000 feet and 5 to 12 minutes at 20,000 feet, followed soon thereafter by unconsciousness.” Many commercial planes fly well above that, at around 35,000 feet, and at that altitude, you have 30 seconds to a minute of what’s called “time of useful consciousness”—the time in which you’re capable of making decisions like “should I put this oxygen mask on my face?” That’s why passengers are told to put their own masks on before helping their children; it sounds like a heartless instruction, but parents who put kids first might not have the wherewithal to save themselves once they’re done.
It’s also worth noting, as the FAA does, that the effects of hypoxia can be tough to recognize, especially if they come on gradually. So if you maybe forgot to pressurize the plane and the level of oxygen is slowly dropping, it might be quite hard to tell when you or your co-pilot are succumbing to hypoxia.
Those effects? We’ll let the FAA take this one again: “judgment, memory, alertness, coordination and ability to make calculations are impaired, and headache, drowsiness, dizziness and either a sense of well-being (euphoria) or belligerence occur.”
Because the effects are so severely impairing yet hard to recognize, many pilots—especially those in the military—go through training to experience what it’s like themselves and see it in their coworkers. Basically, a group goes into a hypobaric chamber with oxygen masks on, then one person takes theirs off and is assigned basic tasks to show at what point they stop being able to think properly. You can see it for yourself in this video:
That pilot loses the ability to even tell you what card he’s looking at—much less tell you there’s a problem or fly a freaking airplane—in just a few minutes.
This is a massive problem if hypoxia occurs before pilots are able to figure out what’s going on. The worst case scenario is something like what happened to Helios Airways Flight 522 from Cyprus to Athens in 2005. Flight crew members from the previous trip noted an issue with one of the door seals, and in order to fix the problem an engineer had to switch the pressurization system to manual. He fixed the door, but forgot to switch the system back to auto, and flight crews then failed to correct it during three separate checks of the plane—and pilots managed to misidentify the literal warning signs multiple times. Eventually, they radioed to ask for help with the equipment cooling system, and the very engineer who had flipped the switch to manual asked the pilots to confirm that the pressurization system was set to auto.
Unfortunately, the pilots were already in the early stages of hypoxia and kept talking about the cooling system, not understanding what was happening. Almost everyone onboard eventually lost consciousness and the plane continued on autopilot to Athens, where it entered a holding pattern. Not long after, the engines flamed out as they ran out of fuel and the plane crashed into hills outside Athens, killing everyone on board.
Most of the time, though, decompression is survivable. One Southwest Airlines flight got a 17-inch hole in the fuselage while flying at 34,000 feet and absolutely no one died. Two years later another Southwest flight got a 60-inch long gash in it where a joint failed, and again, everyone made it to their destination alive.
So if you’re ever on a flight that does lose pressure, rest assured that you will most likely be fine. You’ll put on your air mask (remember: put your own on before helping others) and you’ll freak out a bit while the plane descends rapidly. Then the trained professionals will, most likely, get the aircraft to the ground in one piece.

The Convergence of Automotive Trends: A Conversation with GaN Systems CEO Jim Witham

The automotive landscape is changing rapidly. GaN Systems CEO Jim Witham spoke with AAC about the unique challenges of efficient power in automotive applications and how autonomous vehicles represent a convergence of major trends across the industry.
The term "mobility" has taken on a new significance in recent months and years, extending beyond the concept of moving from point A to point B. From the halls of CES where "mobility as a service" reigned supreme to the multi-billion dollar investments in developing autonomous vehicles over the past several years, mobility has become an important concept to the electronics industry.
While this increased attention on mobility has naturally resulted in a more intense focus on automotive applications, it turns out that many major trends in the industry are converging in this sphere, including data centers, renewables, and new methods of charging electric systems.

Toyota president Akio Toyoda unveiling the e-Palette mobility-as-a-service concept at CES 2018

GaN Systems CEO Jim Witham believes that a big portion of the evolution of electric vehicles and autonomous vehicles (EVs and AVs) will hinge on higher levels of efficiency. From his perspective, achieving this efficiency will require the use of GaN, or gallium nitride, a semiconductor that's been posed as an alternative to silicon and can allow for smaller, lighter power systems. 
In a recent trio of videos, Witham discusses the concept of mobility, EVs, and AVs with Uwe Higgin of BMW i Ventures, a venture capital group for incubating innovative technologies and a program in which GaN Systems participates.
Witham spoke with AAC to expand that conversation and lay out how EVs and AVs are more than just a particularly progressive portion of the automotive industry. EVs and AVs represent a new era in electronics, weaving together several industry trends that have been developing for years—an era in which Witham believes GaN will be crucial.

Jim Witham, CEO of GaN Systems

The Power Electronics Revolution

"We're in the midst of a revolution happening in power electronics," Witham begins. From the rise of the internet and the availability of affordable memory to the surge in mobile computing devices, the technological landscape has been changing quickly (and sometimes drastically) year over year. According to Witham, the two biggest areas that have grown due to these changes have been the development of data centers (particularly due to a rise in online activity spurred by an expanding IoT) and the electric vehicle.
For someone like Witham, whose business deals intensively with power systems and their efficiencies, it's not hard to see how these particular trends are tied closely to power. But, he says, the scope of the issue of efficiency goes beyond the challenges of creating an effective power source. "[Efficiency] means you've got to be efficient with your energy and how it's used. But it also means you've got to be efficient with your materials—your copper and your aluminum and your printed circuit boards. You want to make them as small as possible so you minimize the amounts of materials. When you get down to it, it's all about circuits."
GaN Systems has built a reputation off of the idea that gallium nitride transistors are going to be instrumental in increasing efficiencies for power applications. Over the last several years, GaN has been gaining traction and, in some ways, threatening silicon's dominance in the industry. Especially in the last year or two, it seems that major corporations like Texas Instruments, Analog Devices, Dialog Semiconductor, Qualcomm, and others have been investing in this alternative semiconductor and releasing GaN components and modules.
Current applications for GaN are wide-reaching. It's been noted for its results as a semiconductor used for RF applications, including RF power amplifiers. It's also made possible things like 99% efficient inverter power stage designs.

Witham believes that GaN will be key for many upcoming trends and applications. "...with GaN transistors, you can make things that are four times smaller, four times lighter in weight, four times less energy as heat. They can make the overall system cost cheaper because of that. It's really a driving force for providing the vision for those things that are changing in our society."

An example of a GaN transistor. Image courtesy of GaN Systems.

One of those society-changing concepts is how the automotive industry is rapidly evolving, especially in its move towards electrification and autonomous vehicles.

The Changing Landscape of Automotive Applications

In a general sense, vehicles are getting more high-tech. This is perhaps an unsurprising development as even refrigerators and toasters are becoming advanced enough to require cybersecurity measures. Tech companies, then, often view automotive as an important vertical for product development.
Witham says that GaN Systems, for their part, has viewed automotive as a basic building block in their company's focus for years: "We segment the market into four areas. One is consumer, one is data center, one is industrial, and the fourth is automotive or transportation." Transportation as a concept can, of course, include "everything from satellites to scooters, drones, forklifts, e-bikes. You name it. If it moves, people want to have lightweight power electronics in it—so it really cuts across all the venues, not just cars."
The real focus for transportation in recent years, however, is indisputably cars. Companies around the globe have been investing in technologies such as machine learning algorithms, longer-lived battery designs, and LiDAR sensors, all with automotive in mind. "You see it in places like the Consumer Electronics Show in Las Vegas, which used to be about TVs and computers but is now the biggest car show in the world."

A MobilEye display at CES 2018 shows off sensor innovations for automotive applications.

But while it may not seem extraordinary for a semiconductor company to consider automotive a major vertical, it's worth considering that automakers may not be quite as prepared to make technology a major part of their overall strategy. Automakers have largely enjoyed a relatively stable demand for their products. In recent years, however, Witham has noted that the tech industry has been placing increasing pressure on these automakers as technology companies inch further into the automotive space.
On one hand, more advanced technologies in cars have buoyed automakers with more attractive features to build into their products. On the other hand, the life cycle of the typical vehicle is much longer than that of the typical smartphone. With computing systems increasingly appearing in vehicles, it seems inevitable that industry-wide shifts will need to occur to allow automakers to keep up.
When asked if he thought the automakers will need to adjust lifecycles to keep pace with innovations in tech, Witham said, "I think they have to. I think they realize it. It has a lot to do with the non-car companies that are getting into this marketplace: the Googles and the Apples and the people like that who are used to these fast product cycles and fast turnarounds. They've flipped the whole model upside down. So the automotive companies know they have to react or they could be out-innovated by the other guys. If they're on a five-year design cycle and the other guys are on a three-year design cycle...Wow, that's two and a half models out in the same time period. That's not good."

"The automotive companies know they have to react or they could be out-innovated by the other guys."

Of course, it isn't a simple matter of technology easily slipping into the automotive realm. Automotive applications face several unique challenges, including rapidly changing usability expectations and starkly unchanging safety expectations.

More Tech, Same Form Factor: Expectations for Electric and Autonomous Vehicles

While it may not necessarily look it on the outside, a high-end car released today is markedly different from one released 20 years ago. The concept that a vehicle can connect wirelessly to a smartphone is no longer a novelty but rather an expectation. Now there may also be expectations for a backup camera with predictive guidance, lane sensing and correction, capacitive touch-enabled infotainment consoles, and a whole host of other integrations of technologies that are just becoming common in consumer items.
All of these "bells and whistles" require power, making efficiency and power conversion more important than ever.
"It's only going to get more so, right?" Witham quips. "Those [features] are all kind of new things that you have got to charge. You've got to provide power for those things. They all have to go into a car but we don't want our power shape to grow. We have a basic shape. There's an SUV shape and there's a sedan shape and there's an economy car shape. They define what's put on the road. You can't grow that. But all the other stuff has to get smaller in order to be able to put more in there. I don't ever get to the point of 'it's small enough' in the automotive world. We always want to strive for an extra cubic inch because that can be used somewhere else."

BMW's ConnectedDrive is an example of inter-device connectivity. Image used courtesy of BMW.

Despite the fact that cars provide such large form factors, space is still prime real estate when it comes to the size of electronics. At present, this sets automotive apart from other applications where "small enough" is still an important concept. 
"A big marketplace for us is solar renewable energy. It's kind of like a panel is a certain size and, once the electronics get small enough, it really doesn't matter anymore because you can stick them under the panel and so added shrinkage doesn't really help. Not so in the automotive industry. Keep striving for zero."

Safety and Reliability

So what sets automotive apart from other applications when it specifically comes to power?
"Probably the biggest difference is the reliability and quality," says Witham. "When we make and design transistors for the automotive market, we make things that can handle higher temperatures, more extreme cycles, higher voltages, higher current. We do this because we've supplied to the automotive industry before and we know that when the components last longer, then the subsystems last longer and the cars last longer themselves. When I first drove a car, cars broke a lot more than they do today. I'm pretty impressed with how mine, my family's, or my friends' cars last and how little they need to be repaired today compared to the old days. It's because, bottom line, the components are built to be really high quality and more reliable. We see that in spades with the automotive industry and with the amounts of testing we do and the amount of scrutiny we get into with our customers."
Reliability is also a major pain point when it comes to both AVs and EVs. 
"For EVs, it's just standard automotive. People expect their cars to last for ten years and drive hundreds of thousands of miles and you've got to continue to do that with an EV just like you did with an internal combustion engine," says Witham.
But he doesn't necessarily see this expectation as a negative.
"There's always a drive for smaller, lighter, and more efficient. We're going to push up the efficiency higher and higher. By driving that efficiency up, you get really huge changes for the car because you can take that energy and instead of wasting it as heat, you can drive the car further. Or you can take batteries out of the car because batteries are heavy and expensive... When you don't burn up the energy as heat and you utilize it usefully, then you can also reduce the cooling system. If we can take three quarters of the wasted heat and put that back into useful energy and only have one quarter left, we can make the cooling system one quarter the size and that has these added benefits. So driving up the efficiency higher and higher gives us longer-distance cars, fewer batteries in the car, and smaller cooling systems which all make for a better vehicle."
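Witham's cooling argument is straightforward arithmetic. The sketch below uses hypothetical efficiency figures (the 96% and 99% values are illustrative assumptions, not numbers from the interview):

```python
# Compare waste heat for two power-stage efficiencies at the same input power.
def waste_heat_kw(input_kw: float, efficiency: float) -> float:
    """Power lost as heat by a stage with the given efficiency."""
    return input_kw * (1.0 - efficiency)

input_kw = 100.0
old_loss = waste_heat_kw(input_kw, 0.96)   # 4 kW lost as heat
new_loss = waste_heat_kw(input_kw, 0.99)   # 1 kW lost as heat

# Recovering three quarters of the wasted heat leaves one quarter of it,
# so the cooling system can in principle be sized at a quarter as well.
ratio = new_loss / old_loss
print(f"Old loss: {old_loss:.1f} kW, new loss: {new_loss:.1f} kW, "
      f"cooling ratio: {ratio:.2f}")
```

The recovered energy then goes toward range, fewer batteries, or both, which is the compounding benefit Witham describes.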

Tying Trends Together for Sustainable Autonomous Vehicles

In March, GaN Systems showcased several demos at APEC, including applications related to renewables (such as solar), EVs, and data centers. As it turns out, all of these topics are important for autonomous vehicles.
Those same companies that are investigating entering the automotive sphere have been vastly expanding their data centers. The amount of data processing and storage necessary to support millions of autonomous vehicles is staggering. In broad terms, data must be gathered from sensors and systems in a vehicle, processed, run through decision-making programs, and then fed back to the car's systems to execute actions. While some of this processing must necessarily be done in the car itself, sending data back to company data centers is important for exposing machine learning algorithms to the datasets that will allow them to make better driving decisions.
In 2016, Intel claimed at CES that a single autonomous vehicle could require four terabytes of data per day. This makes data center efficiency important to a sustainable future for autonomous vehicles; according to Witham, companies like Google but also automakers like BMW are expanding their data centers. 
But efficiency issues also include conversations about renewable resources.
"I had an 'aha' moment when I was visiting a couple of other companies and we were talking about CO2 emissions," says Witham. "EVs don't make any sense at all if you're going to make your electricity using coal and oil. You've got to have wind and solar and hydro in order to make the electricity to power those electric vehicles, to power those data centers, and to crunch the data. If you tie all of those together, it really makes a great story of the future. If you don't and any one of those breaks down, the whole vision breaks down. These are all interrelated concepts and I feel like GaN Systems is doing its little part to make all the pieces happen and make the step forward for mankind in the power industry."



These power issues can have far-reaching effects on everyday life. Witham thinks of mobility as an important issue for social responsibility. A huge portion of our lives is spent in transit, he argues, and the repercussions of making the power systems behind transportation more efficient and available are huge.
"I think how we use our cars can be socially responsible. There's this huge gain to be had for society. We make the transistors that make not only the electric vehicle go but also a lot of the sensors and compute power that go with an autonomous vehicle. We can play a pretty big role in making that happen and making the assets more useful and making the time more useful to the people and the goods that are in front of us."
Jim and Uwe have been releasing a series of articles on the topics of EVs and autonomous vehicles to accompany their video trio. You can see the most recent article here.
You can watch the first part of Jim's conversation with Uwe below:

Just one autonomous car will use 4,000 GB of data/day


Self-driving cars will soon create significantly more data than people—3 billion people’s worth of data, according to Intel

 

Two real-life, practical, semi-autonomous vehicle launches next year are an indication that the self-driving car is really happening. 

Audi is expected to make its up-to-35-mph hands-free driving system available late next year in some 2018 vehicles. 

And Volvo will start testing Drive Me, an autopilot that will introduce 100 Swedish XC90 owners to autonomous driving, according to an Automotive News supplement produced for the Los Angeles Auto Show last month. 

Two mega-strides forward. But if you’re impatient and wondering why it’s taking so long for car makers to offer full autonomy, as in eye-free driving, one clue is in the data. The amounts of data that need to be produced and then shared in real time to make it all work are absolutely staggering. 

Vehicles will generate and consume roughly 40 terabytes of data for every eight hours of driving, according to Intel CEO Brian Krzanich, speaking at the auto show’s technology pavilion, Automobility.
There is a “flood of data that’s coming,” he told the automotive industry professionals. And it’s going to be significantly more than the amount of data that the average person generates today. 

The average car will churn out 4,000 GB of data per day, he says. And that's from just one hour of driving a day. One can compare that to an average person’s video, chat, and other internet use, which Krzanich says is about 650 MB per day today and will more than double, to 1.5 GB per day, by 2020.

Why so much data?

One reason for the car’s appetite is the hundreds of on-vehicle sensors. Cameras alone will generate 20 to 40 Mbps, and the radar will generate between 10 and 100 Kbps, Intel says. 

“Each car driving on the road will generate about as much data as about 3,000 people,” Krzanich says. And just a million autonomous cars will generate 3 billion people’s worth of data, he says.
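Those ratios are easy to sanity-check against the article's own figures (the single-camera calculation assumes the top of Intel's quoted 20 to 40 Mbps range):

```python
# Sanity-check the ratios quoted in this article.
CAR_GB_PER_DAY = 4_000     # one autonomous car, from ~1 hour of driving
PERSON_GB_PER_DAY = 1.5    # projected per-person internet use by 2020

people_equivalent = CAR_GB_PER_DAY / PERSON_GB_PER_DAY
print(f"One car ~= {people_equivalent:,.0f} people")   # ~2,667, i.e. "about 3,000"

# One camera at the top of Intel's quoted range, over one hour of driving:
camera_mbps = 40
camera_gb_per_hour = camera_mbps * 3600 / 8 / 1000     # Mbit/s -> GB over 1 hour
print(f"One camera: ~{camera_gb_per_hour:.0f} GB/hour")

# A million such cars, expressed in "people's worth" of daily data:
fleet_people = 1_000_000 * people_equivalent
print(f"A million cars ~= {fleet_people / 1e9:.1f} billion people's worth")
```

A single camera accounts for only a small slice of the total, which is why hundreds of sensors per vehicle add up to such a staggering aggregate.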
Maps are just one example of the incoming data a car will need, and those maps won’t be a one-time Google map download, as one can perform now. They will have to be extremely detailed and timely—down to the nearest inch. They will be used for lane control and road hazards, among other things, so they will need to be continuously updated. 

“You’re going to have to have data as much as any other kind of propulsion,” Krzanich says.
Krzanich splits the continually changing intelligence into three data sets:
The car will have to learn about such things as cones in the road and other hazards, which Krzanich calls technical data. 

There will also be societal data, also called crowd-sourced data. It includes an automatic version of platforms such as Waze, for example. Waze is a community-based traffic awareness app that is heavily reliant on crowd-sourced traffic reports. 

Personal data will make up the third classification. That includes locations and stop times.
Without data, your self-driving car "will have to deal with the world in a very manual way,” Krzanich says. “Data is the next oil.”

A Quick Start to Embedded GUI Applications

This article discusses how to leverage a graphics library with a tightly-coupled graphics toolset to expedite GUI development, introducing basics, libraries, tool integration, and the MPLAB Harmony Graphics Composer.
Intuitive and impactful graphics can add value and help differentiate a product in an increasingly competitive market. Advancements in smartphone technology have set a high bar for embedded graphics. Users expect an intuitive, touch-enabled graphical user interface (GUI) with rich colors that use crisp industrial design aesthetics. Many talented application designers are tasked with creating such GUIs without adequate experience, knowledge, or training. As with any other task, inexperience leads to inefficiency and can impact time to market, ultimately hitting bottom-line productivity. 

Figure 1. Users expect smartphone-like touch control on most modern devices, from car infotainment systems to smartwatches.

Embedded Graphic Design Basics

The most basic element in graphics is the pixel. Pixels are points of color used to compose more complex structures such as geometric shapes, images, and text.  
Geometric shapes such as rectangles and circles are often used in GUI design. Filling the background with a color is as simple as drawing a large rectangle. Circles and curves give the design a modern quality by subtly bringing the analog nature of the real world into the harsh contrast of the digital world.
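As a toy illustration of the "background is just one big rectangle" idea, here is a naive framebuffer fill. A real embedded graphics library would do this in optimized native code against display memory; this is only a sketch of the concept:

```python
# A toy framebuffer: a flat list of RGB pixels in row-major order.
WIDTH, HEIGHT = 16, 8
framebuffer = [(0, 0, 0)] * (WIDTH * HEIGHT)

def fill_rect(fb, x, y, w, h, color):
    """Set every pixel of the w-by-h rectangle at (x, y) to `color`."""
    for row in range(y, y + h):
        for col in range(x, x + w):
            fb[row * WIDTH + col] = color

# Filling the background is just one full-screen rectangle...
fill_rect(framebuffer, 0, 0, WIDTH, HEIGHT, (30, 30, 60))
# ...and a widget body is a smaller rectangle drawn on top of it.
fill_rect(framebuffer, 2, 2, 6, 4, (200, 200, 200))
```

Everything more elaborate (circles, images, glyphs) ultimately reduces to setting pixels in the same way.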

Figure 2. A high-quality GUI design is achievable with a combination of text, images and judicious use of geometric shapes such as circles and rectangles.

For more complex visual aesthetics or artistic consistency, images can be used instead. Images come from external sources, either captured by a digital camera or created digitally by artists. 
Text is necessary to communicate the function of the GUI. Text can be characters and symbols of any language. Each character in a computer font is called a glyph. To manage style, uniformity, and scaling, glyphs are packaged into digital data sets known as fonts.

Using Images and Fonts

Typically, images and fonts are integrated into the code of the GUI application as binary data blocks.  They are often referred to as assets. Depending on how the assets are packaged, they place different demands on system resources.
In the embedded space, where system performance, memory, and storage are all limited resources, finding a balance between all three is paramount. Geometric shapes are procedurally generated, requiring comparably fewer resources than font and image assets.

Font assets are often computer font packages that are preprocessed on a development desktop computer into a set of bitmap images, one per glyph. Bitmap fonts are generally preferred over vector fonts because they require far fewer CPU calculations to draw at runtime; the tradeoff is that they are non-scalable, requiring a separate set of glyphs for each size. Storage size can be managed by filtering out glyphs from the source font package that are not needed for the GUI design.
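The glyph-filtering step amounts to a subset operation. The sketch below uses a hypothetical dictionary-of-bitmaps representation purely for illustration; real font tools operate on actual font files:

```python
# A bitmap font modeled as a mapping from character to its pre-rendered glyph.
# The string placeholders stand in for per-glyph pixel data.
full_font = {ch: f"<bitmap for {ch!r}>"
             for ch in "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .:%"}

def subset_font(font, ui_strings):
    """Keep only the glyphs actually used by the GUI's text."""
    used = set("".join(ui_strings))
    return {ch: bmp for ch, bmp in font.items() if ch in used}

# Hypothetical strings a small appliance GUI might display:
ui_strings = ["TEMP: 23 C", "FAN: 40%"]
small_font = subset_font(full_font, ui_strings)
print(f"{len(full_font)} glyphs -> {len(small_font)} glyphs")
```

Shipping only the glyphs the design actually uses can cut font storage substantially, especially for large character sets.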

Image assets present similar challenges. While images can be compressed into standardized formats such as JPEG or PNG for significant storage savings, decompressing large images takes time and memory, and given the limited resources available at the microcontroller level, those requirements are almost always more than ideal. Compression techniques such as run-length encoding (RLE), which offers a good tradeoff between storage savings and runtime performance, are good alternatives. However, while image files in standardized formats can be directly integrated as image assets into the GUI application, image assets using RLE need to be converted from a source image with the help of a tool on the development desktop.
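Run-length encoding itself is simple enough to show in full. This is a generic, illustrative value-per-pixel sketch, not the specific RLE scheme used by any particular graphics library:

```python
# Minimal run-length encoder/decoder over a sequence of pixel values.
# Each run is stored as a [count, value] pair. This works well for images
# with large flat-colored areas and poorly for noisy photographic content.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, p])       # start a new run
    return runs

def rle_decode(runs):
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

# A scanline with flat regions compresses well:
row = [0] * 20 + [255] * 8 + [0] * 20
encoded = rle_encode(row)
assert rle_decode(encoded) == row                     # lossless round trip
print(f"{len(row)} pixels -> {len(encoded)} runs")    # 48 pixels -> 3 runs
```

Because decoding is a single linear pass with no working buffer beyond the output, RLE suits microcontrollers where JPEG or PNG decompression would be too costly.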

Graphics Library and Tightly-Coupled Tool Integration

The essence of GUI design is grouping and drawing geometric shapes, images, and text in an organized fashion. Supporting asset management and design rendering in an application developed from scratch requires a significant investment of effort. A more practical alternative is a framework of specialized code, designed to manage the various graphics elements, that can be ported from one application to another. Such a framework is known as a graphics library.
Most graphics libraries have application programming interfaces (APIs) that come with a steep learning curve. To offset this, a What You See Is What You Get (WYSIWYG) designer tool can help. 
In combination with a WYSIWYG designer tool, users can quickly integrate GUI designs into their applications by using an Asset Converter to convert fonts and images to assets, as well as a Code Generator tool with knowledge of the graphics library’s APIs.

Figure 3. The combination of a WYSIWYG designer tool, Asset Converter, and Code Generator eases the learning curve of developing a GUI application. To maximize development efficiency, the tight coupling of these tools into a single development environment is essential.

Finally, it is important to point out the design process itself. The GUI design never comes together in a single design pass. It is often an iterative process with a lot of refinements along the way. This process can consume a large percentage of the total software development time. To achieve maximum efficiency, it is important to minimize the time required for each iterative cycle, and a tightly-integrated toolset can save a lot of development time. One such tool is the MPLAB Harmony Graphics Composer (MHGC).

Figure 4. The MPLAB Harmony Graphics Composer provides WYSIWYG design capability, asset management, and resource management. 

The MHGC provides a tightly coupled, integrated development environment with the ability to visually design the GUI, convert assets, manage the MCU’s resources, generate code, and assemble the files into a project in the Integrated Development Environment (IDE). With a tool such as MHGC, one can put graphics on the display in minutes without handwriting a single line of code. The ability to adjust the graphics design and quickly regenerate deployable code dramatically cuts development time.

Conclusion

In summary, we have discussed a way to quickly add a GUI to an application. Using a graphics library and an integrated graphics tool suite can make what seems to be a herculean effort manageable. Ultimately, the use of an integrated toolset across a full production cycle can provide huge savings in development effort.

MEMS Sensors for Automotive Applications: A Glance at STMicroelectronics’ 6-Axis Inertial Module

While some car enthusiasts resist the integration of technology into automobiles, the industry is marching on towards more and more electronic components in cars. STMicroelectronics has announced their latest automotive-grade MEMS, the ASM330LHH, which aids in equipping cars with sensor technology.
Automobiles have experienced some extraordinary changes over the last few decades. The first cars were mostly mechanical, with basic electrical systems that provided power for spark plugs and headlights. As technology progressed, cars were fitted with the latest gadgets and gizmos, including radios, electric windows, wipers, alarms (and even mini-fridges for champagne). GPS has made printed maps redundant and, in recent years, capacitive touchscreens in infotainment centers have changed the way we interact with our cars' maintenance interfaces and comfort features.
More pressing technological advancements have also been made to vehicles for safety purposes, including better airbag deployment and even lane correction. The surge in these sensor-dependent features has challenged engineers across the industry to develop increasingly more accurate sensors with automotive applications in mind. Here's a look at a couple of MEMS sensors released specifically for automotive use.

The New Generation of Automotive Sensors

STMicroelectronics announced on July 9th a new automotive-grade MEMS sensor, the ASM330LHH, which aims to meet the sensor demands of modern cars. The ASM330LHH is a system-in-package that integrates a 3D digital accelerometer, a 3D digital gyroscope, and supporting hardware to address automotive non-safety applications. The sensor is AEC-Q100 qualified, operates over an extended temperature range of -40 to 105 degrees C, and embeds compensation for temperature drift.
The sensor also integrates the following:
  • Accelerometer with a user-selectable full scale up to ±16g
  • Gyroscopic range from ±125 to ±4000 dps
  • SPI and I2C interface
  • Six-channel synchronized output (important in dead reckoning applications)
  • Programmable interrupts
  • 3K FIFO
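
The value of six synchronized channels for dead reckoning can be illustrated with a toy 2-D integration loop. This is a simplified sketch, not ST's algorithm: the sample format, time step, and single-plane model are all assumptions, and a real system would also fuse wheel speed and GNSS.

```python
import math

# Illustrative 2-D dead reckoning from synchronized gyro (yaw rate) and
# accelerometer (forward acceleration) samples.
def dead_reckon(samples, dt):
    """samples: list of (yaw_rate_dps, forward_accel_mps2); returns (x, y) in meters."""
    heading = 0.0   # radians
    speed = 0.0     # m/s
    x = y = 0.0
    for yaw_dps, accel in samples:
        heading += math.radians(yaw_dps) * dt   # integrate gyro -> heading
        speed += accel * dt                     # integrate accel -> speed
        x += speed * math.cos(heading) * dt     # integrate speed -> position
        y += speed * math.sin(heading) * dt
    return x, y

# Straight-line check: no rotation, constant 1 m/s^2 for 10 s of 100 Hz samples
x, y = dead_reckon([(0.0, 1.0)] * 1000, dt=0.01)
```

Because both channels are sampled at the same instants, the heading and speed integrals stay aligned; unsynchronized channels would skew the position estimate.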

Image courtesy of STMicroelectronics.

The sensor is incredibly small, measuring just 2.5 mm x 3 mm x 0.86 mm, and is housed in a 14-pin LGA package.
Typical applications for the sensor include:
  • Dead reckoning
  • Telematics and eTolling
  • Anti-theft systems
  • Impact detection
  • Crash reconstruction
  • Motion-activated functions
  • Driving comfort (i.e., automatic seat adjustments)
  • Vibration monitoring (i.e., suspension quality)

Image courtesy of STMicroelectronics.

A Look at Other Automotive-Focused MEMS Sensors

STMicroelectronics is not the only company making automotive MEMS devices; Bosch also has several devices addressing this application. 
Bosch's two main sensors are the SMI130 and the SMI700. The SMI130 is an inertial sensor with a small footprint and low power consumption. It is also AEC-Q100 qualified and is targeted at applications including navigation (for dead reckoning), vehicle dynamics logging, and car alarms.

Image courtesy of Bosch.

Its sister sensor, the SMI700, has vibration resistance and offset temperature stability. Unlike the SMI130, the SMI700 is specifically designed for use with ESP and premium vehicle dynamics control (VDC) functions such as hill-hold control, active front steering, and adaptive cruise control. The SMI700 sensor can measure data regarding rotation around the vertical axis and can deliver data about the lateral and longitudinal acceleration.
The acceleration sensor in the SMI700 uses a movable comb-like seismic mass suspended on silicon spring bars between fixed counter-electrodes. When external forces act on the mass (such as a sudden change in velocity), the mass deflects from its resting position, which results in a change in capacitance.
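
A back-of-the-envelope model of that sensing principle treats each side of the mass as a parallel-plate capacitor: deflection toward one electrode raises its capacitance and lowers the other's. The geometry values below are illustrative assumptions, not figures from the SMI700 datasheet.

```python
EPS0 = 8.854e-12   # vacuum permittivity, F/m

def differential_capacitance(area_m2, gap_m, deflection_m):
    """Capacitance change when the mass moves deflection_m toward one electrode."""
    c_near = EPS0 * area_m2 / (gap_m - deflection_m)   # gap shrinks on this side
    c_far = EPS0 * area_m2 / (gap_m + deflection_m)    # gap grows on the other
    return c_near - c_far

# 0.1 mm^2 of comb area, 2 um rest gap, 10 nm deflection -> a few femtofarads
dC = differential_capacitance(1e-7, 2e-6, 10e-9)
```

The differential arrangement doubles sensitivity and cancels common-mode effects such as uniform thermal expansion, which is why comb structures measure the difference rather than a single capacitance.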

Image courtesy of Bosch.
 
Characteristic          SMI130                              SMI700
Package                 LGA16                               BGA34
Sensing axes            X, Y, Z (Ω) and X, Y, Z (a)         X (Ω) and Y, Z (a)
Range                   Up to ±2000 dps (Ω) and ±16 g (a)   Up to ±300 dps (Ω) and ±5 g (a)
Operating temperature   -40°C to 85°C                       -40°C to 125°C
Supply voltage          3.3 V                               3.3 V or 5 V
Interface               SPI, I2C                            SPI, PSI5, CAN
Table comparing the SMI130 and the SMI700. Data taken from Bosch.



As cars evolve, so must the components required to facilitate their increasingly complex functionalities. What other automotive-geared components have caught your eye?

Featured image used courtesy of STMicroelectronics.

What Is Google Android?



What is Android? We're not talking about robots. In this case, we're talking about smartphones. Android is a popular, Linux-based mobile phone operating system developed by Google. The Android operating system (OS) powers phones, watches, and even car stereos. Let's take a closer look and learn what Android really is. 

Android Open-Source Project 

Android is a widely-adopted open-source project. Google actively develops the Android platform but gives a portion of it for free to hardware manufacturers and phone carriers who want to use Android on their devices. Google only charges manufacturers if they also install the Google apps portion of the OS. Many (but not all) major devices that use Android also opt for the Google apps portion of the service. One notable exception is Amazon. Although Kindle Fire tablets use Android, they do not use the Google portions, and Amazon maintains a separate Android app store. 

Beyond the Phone

Android powers phones and tablets, but Samsung has experimented with Android interfaces on non-phone electronics like cameras and even refrigerators. Android TV is a gaming/streaming platform that uses Android. Parrot even makes a digital photo frame and a car stereo system with Android. Some devices customize open-source Android without the Google apps, so you may or may not recognize Android when you see it. 

Open Handset Alliance

Google formed a group of hardware, software, and telecommunication companies called the Open Handset Alliance with the goal of contributing to Android development. Most members also have the goal of making money from Android, either by selling phones, phone service or mobile applications.

Google Play (Android Market)

Anyone can download the SDK (software development kit) and write applications for Android phones and start developing for the Google Play store. Developers who sell apps on the Google Play market are charged about 30% of their sales price in fees that go to maintain the Google Play market. (A fee model is pretty typical for app distribution markets.)
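
The fee split works out simply. The 30% rate is the approximate figure given above (it has since varied by developer size and content type), and the function name is just for illustration.

```python
# Quick arithmetic on the ~30% Google Play fee mentioned above.
def developer_payout(price, fee_rate=0.30):
    """Amount the developer keeps from one sale, after the store's cut."""
    return round(price * (1 - fee_rate), 2)

payout = developer_payout(4.99)   # on a $4.99 app, the developer keeps ~$3.49
```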
Some devices do not include support for Google Play and may use an alternative market. Kindles use Amazon's own app market, which means Amazon makes the money off of any app sales. 

Service Providers

The iPhone has been very popular, but when it was first introduced, it was exclusive to AT&T. Android is an open platform. Many carriers can potentially offer Android-powered phones, although device manufacturers might have an exclusive agreement with a carrier. This flexibility allowed Android to grow incredibly quickly as a platform. 

Google Services

Because Google developed Android, it comes with a lot of Google app services installed right out of the box. Gmail, Google Calendar, Google Maps, and Google Now are all pre-installed on most Android phones. However, because Android can be modified, carriers can choose to change this. Verizon Wireless, for instance, has modified some Android phones to use Bing as the default search engine. You can also remove a Gmail account on your own. 

Touchscreen

Android supports a touch screen and is difficult to use without one. You can use a trackball for some navigation, but nearly everything is done through touch. Android also supports multi-touch gestures such as pinch-to-zoom. That said, Android is flexible enough that it could potentially support other input methods, such as joysticks (for the Android TV) or physical keyboards. 
The soft keyboard (onscreen keyboard) in recent versions of Android supports either tapping keys individually or dragging between letters to spell out words. Android then guesses what you mean and auto-completes the word. This drag-style interaction may seem slower at first, but experienced users find it much faster than tap-tap-tapping messages. 

Fragmentation

One frequent criticism of Android is that it's a fragmented platform. Parrot's photo frame, for example, bore absolutely no resemblance to an Android phone. Had the developers not told me they'd used Android, I'd have never known. Phone manufacturers like Motorola, HTC, LG, Sony, and Samsung have added their own user interfaces to Android and have no intention of stopping. They feel it distinguishes their brand, although developers often express frustration at having to support so many variations.

The Bottom Line

Android is an exciting platform for consumers and developers. It is the philosophical opposite of the iPhone in many ways. Where the iPhone tries to create the best user experience by restricting hardware and software standards, Android tries to ensure it by opening up as much of the operating system as possible.
This is both good and bad. Fragmented versions of Android may provide a unique user experience, but they also mean fewer users per variation. That makes the platform harder to support for app developers, accessory makers, and technology writers (ahem). Because each Android upgrade must be adapted to the specific hardware and user-interface changes of each device, it also takes longer for modified Android phones to receive updates.
Fragmentation issues aside, Android is a robust platform that boasts some of the fastest and most amazing phones and tablets on the market.

What Is Google Play?


Google Play is the one-stop-shop for Android apps, games, music, movie rentals and purchases, and e-books. On Android devices, the entire Google Play Store can be accessed through the Play Store app. Standard apps appear in the Android system tray, but Play Games, Play Music, Play Books, Play Movies & TV, and Play Newsstand are all libraries of downloadable content. Each has separate player apps that allow you to access your content. That means you can also view Play Music, Play Books, and Play Movies on laptops and non-Android smartphones.

Note: The Google Play store (and all of the information covered in this article) should work no matter who made your Android phone: Samsung, Google, Huawei, Xiaomi, etc.

The Google Store and Smartphones, Watches, Chromecasts, and Nest Thermostats

Google Play previously offered a devices tab in the Play Store, but device transactions are not the same as software transactions. Devices require transactions like shipping, customer support, and potential returns. So, as Google's device offerings expanded, Google split the devices into a separate location called the Google Store. Now, Google Play is strictly for downloadable apps and content.

Chrome and Chromebook Apps

In addition to devices, Chrome apps have their own store in the Chrome Web Store. This is where you find apps that run on both the Chrome web browser and the Chromebook. The company split Chrome-related apps away from the Play Store because those apps are strictly for Chrome-based products. However, you can still use the Google Play Store in Chrome environments.

Previously Known as Android Market 

Prior to March 2012, the markets were more siloed. The Android Market handled app content, while Google Music and Google Books handled music and books. YouTube was the source for movies (and it is still a location for your movie purchases and rentals; you can access your library in both places).
The Android Market used to be as simple as that: an Android app store. When it was the only Android app store, this was pretty straightforward, but Amazon, Sony, Samsung, and just about every phone and Android tablet maker began offering separate app stores.

Why Google Play?

The word play might imply that the store only sells games, but the logo points to a different reason. The Google Play logo is a triangle, like the familiar play button on videos. We're still not sure how a book plays, but we can see this as a combination of play in the content-consumption sense and being playful in exploring what content is available.

Android Apps on Google Play

Google Play sells Android apps, available through the Home and Games sections of the Play Store. Play Books, Play Music, Movies & TV, and Play Newsstand also have dedicated sections that show top recommendations based on your previous downloads. In addition, there are quick-navigation links like Top Charts, Categories, and Editor's Choice. And of course, Google-powered search makes it easy to find anything you might be looking for.

Find Your Tunes in Google Play Music

The old Google Music logo has been retired, for those who remember Google's original song storage locker. However, the Play Music store still works the same way as the old standalone Google Music product. The player operates the way you're used to; the difference is that you find it under the Music section of Google Play. If you're a Google Play customer, watch your email: every once in a while, Google offers promotional free songs and albums.

Grab a Great Read from Google Play Books

Google Books used to be confusingly divided between book search and eBook purchases. Now, Google Books is not the same as the Books section of the Google Play Store. Google Books is an online database that contains a massive library of scanned books from the collections of public and academic libraries.

Google Play Books is an e-book distribution service where users can download and read or listen to e-books and audiobooks. If you had Google books before the change, your library is still there. It's a tab (Library) in the Play Books app now, and the app serves as your e-reader.

Binge Watching with Google Play Movies & TV

Your movie rentals are available both through the Google Play Movies & TV apps and through YouTube Purchases. This sometimes gives you some flexibility, as a lot of devices support YouTube. If you're playing a movie on a mobile device, say you're getting ready to fly somewhere and want to download a movie to watch on the plane, use Google Play Movies & TV. If you're watching from a computer or a device that supports YouTube but not Android, use YouTube.
You also have access to a wide range of television episodes from shows that appear on network and premium channels. Those work in the same way movies do, so the guidelines above apply.

ASD and IndustriAll Joint Statement on Civil Aeronautics Research



The upcoming months will be crucial for the future of the European Union, especially in light of the important debates related to the shaping of the next Multiannual Financial Framework.

Aeronautics is a global business based on excellence, innovative technology and high-level skills. With 160 Bn€ of revenues, over 550 000 direct employees and more than 1.5 million indirect jobs, the European Aeronautics industry is a key contributor to European economic performance and competitiveness. As such the sector generates high quality jobs within the European Union. This is partly the result of decades of high levels of public-private investments in Research and Technology (R&T) by Member States and the European Union, together with industry.

Over the past 40 years, Europe has succeeded in becoming a world leader in civil aeronautics, including civil fixed-wing and rotary-wing aircraft, engines, supply chains and Air Traffic Management technologies. Nonetheless, the competitive technology and business landscape for air transport is changing rapidly, and the European aviation community has to adapt quickly to major game changers. These include the need to reduce the environmental footprint of civil aviation in the context of rising demand for air transport, the reform of Air Traffic Management, the extraordinary levels of US state aid for its domestic Aeronautics industry, the emergence of new competitors such as China, and new technological challenges driven by digitalisation and electrification.

Despite all these challenges, Europe must remain a world leader in Aeronautics. Only by continuing to be a centre of excellence can Europe benefit the citizens who rely on the Aeronautics sector through highly skilled, high-quality jobs, environmental protection and safe, secure and convenient mobility. Furthermore, it is in the interest of the EU’s strategic autonomy that Europe remains a world leader in Aeronautics. EU-funded research programmes have demonstrated the effectiveness of the Public-Private Partnership (PPP) concept and the Joint Technology Initiative (JTI) mechanism for aviation-related topics. The two major European Aviation Research Programmes, Clean Sky (greener and more efficient aviation technologies) and SESAR (Air Traffic Management R&T), represent concrete success stories, acting as catalysts for the whole innovation chain in Europe, and should therefore be kept and consolidated in FP9.

To achieve these challenging goals, the social partners for the aeronautics sector, ASD and industriAll Europe, jointly call on the European Institutions to protect EU research budgets for civil aeronautics and to strengthen the support with a higher budget for Aeronautics research in Framework Programme 9 (FP9). This is essential to keep this sought-after industry, and its thousands of high-quality jobs, here in Europe. Moreover, we should be ambitious and use this European support to expand and create new jobs in this high-tech and developing sector.

Any downturn in Aeronautics research programmes would lead to a rapid loss of competencies and would create a scientific and technological gap that would be almost impossible to close after several years of delayed investment.

The European Aeronautics sector is a European success story and we should be proud of what the sector has achieved. We call on the EU Institutions to commit to supporting the Aeronautics industry and the thousands of workers dependent on its success.


War and peace: Evolving challenges and strategies in the US military

Five experts describe the technological, environmental, and other disruptions that are changing the way the US armed forces manage conflicts and pursue peace initiatives.        


Particularly since the end of the Cold War, US armed forces have addressed a wide range of security concerns, some of which fall into noncombat categories—for instance, helping to manage the emerging effects of the Ebola virus in Liberia, or assisting in rescue and recovery efforts after hurricanes in the Caribbean and Puerto Rico.

How has the US military adapted to this diversification in assignments, while still preparing for and conducting traditional combat operations? What critical strategic and technological questions do the US Department of Defense and other government agencies and leaders need to address? How can they introduce new ways of working within existing infrastructures?

This summer, a group of experts on military and strategic issues shared their perspectives on the factors that are reshaping the global security environment, as part of McKinsey’s Imagine Get-Together event—a recurring forum led by McKinsey’s Navjot Singh. For this edition of the forum, Singh was assisted by McKinsey experts Tucker Bailey and Heather Ichord. The speakers included several longtime leaders in the US Navy, a professor of conflict-resolution studies who has worked with US government agencies as well as international and local NGOs, and the leader of a government organization that aims to bring innovative technologies to the military quickly.
The presenters explored the following:
  • the continued complexities of the hard and soft tasks the US military is now being charged with, and how to find the balance between both in a resource- and time-constrained environment
  • conditions for peace and the importance of negotiation skills among today’s military leaders on the ground
  • the navy of the future and what it might look like
  • technological changes in today’s military
  • the opportunities emerging from industry–military partnerships
Make no mistake, the speakers concluded, the next-generation military is already here: IoT devices, sensors, and other connectivity tools are affecting the way military strategies are developed, communicated, and, in some cases, even executed. Diplomatic considerations are increasingly present at a tactical level on the battlefield. And the US government is revisiting how it funds and pursues innovation, exploring new models and metrics.

Admiral Eric Olson, US Navy (retired), former commander, US Special Operations Command at MacDill Air Force Base:


During my last year in command, I was struck by an image that many of you have seen: the composite satellite imagery of the world taken at night. It shows where the lights are on and where they are off. Certainly, up until the 9/11 terror attacks, our intuitive military thinking was that the most strategically important places on Earth must be where the lights are on at night—along a relatively narrow band of the mid-Northern Hemisphere. That’s where people live, societies develop, goods are produced, and money traverses networks. Then on 9/11, we were struck from a dark place; we were woefully unprepared to deal with and against people who live where the lights aren’t on at night.

Refining the flight path: Seven priorities for commercial aerospace leaders through 2020

Commercial aerospace companies need to focus on several areas, from the evolving industry structure to the development cost curve, to stay competitive.         


The commercial aerospace industry is poised for continued growth. Air transport passenger demand is expected to grow at around 4 percent a year in the next ten years. In the business aviation segment, the declining inventory of used aircraft and increased utilization rates are both reassuring signs. To protect and enhance market positions and maximize value in the long term, industry players should focus on seven priorities.

1. Manage the evolving industry structure

For more than two decades, concentrated structures in most major segments offered several advantages, including reduced exposure to cyclicality, a large production backlog, and overall stability. Recently, however, we see indications of disruption. In large commercial aircraft, for example, new entrants China and Russia are challenging the current duopoly, despite the recent partnership announcements in the regional segment. Similar shifts are happening in small aircraft propulsion, business jet avionics, and in-flight entertainment, thanks to joint ventures and expansion by players from other segments or industries. In addition, organic and inorganic vertical integration by major incumbents will also gradually change the structure in some segments.
This disruption could potentially affect incumbents, ushering in increased vulnerability to cycles, lower backlogs, increased emphasis on R&D, faster product development cycles, pricing pressure, and a shift in value pools. While the market may take decades to evolve, industry players seeking leadership positions must take concerted strategic actions early.

2. Navigate turbulent narrow-body skies

In the past decade, the commercial aircraft market has experienced record orders, resulting in production backlogs equal to more than six years for narrow-body aircraft—an all-time high. This demand and new market entrants have led OEMs to increase announced production rates to a combined 1,700 narrow-bodies a year by 2025. Making reasonable assumptions for the use of these aircraft and the retirement pattern of the installed base, narrow-body capacity (measured in available seat miles) could grow 7 percent a year over the next decade, but passenger demand (measured in revenue passenger miles) is expected to grow at only around 4 to 5 percent a year. Stakeholders will likely address this potential imbalance through some combination of lower-than-announced production rates, earlier retirements of the installed base, lower aircraft utilization, and stimulation of passenger demand. Each of these factors has implications for industry participants across the value chain. No matter how the market clears, some players will experience significant disruption.
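
Compounded over a decade, the gap between those growth rates is larger than the annual figures suggest. The rates below are the article's; taking 4.5 percent as the midpoint of the demand range is an assumption for illustration.

```python
# Capacity vs. demand, compounded over ten years.
def compound(rate, years):
    """Growth multiple after `years` of compounding at `rate` per year."""
    return (1 + rate) ** years

capacity_growth = compound(0.07, 10)    # available seat miles: ~1.97x
demand_growth = compound(0.045, 10)     # revenue passenger miles: ~1.55x
excess = capacity_growth / demand_growth - 1   # ~27% more capacity than demand
```

A roughly 27 percent overhang is what must be absorbed by slower production, earlier retirements, lower utilization, or stimulated demand.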

Aerospace players need to assess how the evolving narrow-body market would affect their business, identify the natural owner of volatility risk, monitor early indicators of market shifts, and plan for likely outcomes. In many cases, this means creating flexibility to react to market changes, both in manufacturing and service capacity and in contracts with suppliers and customers. Further, the shifting market could highlight alternate opportunities for additional growth.

3. Break the development cost curve


Over the past 20 years, new aircraft and system development costs have doubled in real terms—from hundreds of millions for major components to $5 billion for engines and $25 billion or more for aircraft. Together, airframe engineering and design, manufacturing labor, and tooling account for roughly 70 percent of total development costs. The increasing system complexity has caused the growth in development costs to outstrip productivity gains from labor and technological advances. This misalignment could become a significant drag on innovation and slow the introduction of new products: high development costs make the business case for new programs considerably challenging, to the point where players will be “betting the company” every time they greenlight a clean-sheet project.

4. Digitize the supply chain to improve efficiency

Supply chains, which account for the majority of total costs for OEMs and large suppliers, have become increasingly complex due to higher production rates, globally expanded networks, and numerous configurations, despite efforts to rationalize suppliers. A lack of real-time visibility can make supply chains inefficient, raising inventory levels and affecting service levels. Digitization can improve visibility, enhance performance management, and enable the proactive use of leading indicators to address issues before they emerge. There are three broad avenues to digitize the supply chain: links between discrete network nodes to increase visibility, asset intelligence to enable event recognition and translation to support more effective decision-making, and flexible automation to incorporate response mechanisms and remote movement. The optimal combination of these technologies will vary by company depending on the factors contributing to supply chain complexity and performance.

Companies should digitize their supply chain through targeted pilot projects to achieve quick wins and demonstrate value. They should then expand proven projects and communicate success stories across the organization to build momentum to scale the impact.

5. Prepare for the services war

Commercial and business aircraft services (maintenance and engineering, flight operations, and ground operations, among other areas) make up a market of more than $300 billion. It is forecast to grow at 4 percent a year over the next two decades, but most companies expect to aggressively expand their services business at two to three times that pace. Since not every company can gain share, what should industry leaders do to achieve their growth expectations? First, they need a clear understanding of “entitlement”—the annual and life-cycle value of aftermarket services and their addressable portion. Second, they need to determine their current and target share at the product, platform, segment, and customer levels. Third, to achieve the target share they must identify gaps in offerings, value proposition, pricing, and coverage. Last, companies should close these gaps through targeted initiatives, including identifying new sources of value beyond parts, repairs, and upgrades and improving operational performance (a chronic customer pain point). These actions need to be supported by the right operating model, incentive structure, and a robust performance management system.

Analytics and digital capabilities will differentiate winners from losers in this increasingly competitive services market. Successful companies will harness data and analytics to generate insights related to markets, customers, products, and processes in order to create a step change in commercial and operational performance and enable growth.

6. Win the war for talent


Talent is becoming an increasingly important battleground for aerospace players for three reasons: First, retirement rates combined with industry growth will require more than 25,000 new aerospace workers (including engineers, factory workers, and technicians) annually in the coming years. While automation could reduce this need—the McKinsey Global Institute estimates that more than two-thirds of maintenance and production activities in aerospace could eventually be automated—such a transformation will take many years to reach scale. Second, competition for talent is heating up, and legacy aerospace has lost some of its excitement for new graduates, especially compared with leading tech companies. Our analysis shows that today only about one of ten graduates from top aerospace engineering university programs chooses to work at major aerospace players. Last, the skills and capabilities required in aerospace in the coming years will be very different from previous decades. The shift in business models (for example, increased focus on services) and technology is dramatically changing the profile of the aerospace workforce to include capabilities such as data analytics, automation, and software.

Aerospace players must make talent a top priority and update their strategy to reflect the current needs of the industry. Further, companies need to review—and likely adjust—their value proposition to compete more effectively for the next generation of talent, especially with internet, high-tech, and consumer electronics players.

7. Find your place in the future of mobility

For the first time in decades, multiple new aircraft segments are poised to disrupt the industry. For example, small, electric vertical takeoff and landing (eVTOL) passenger vehicles could be a $100 billion market in the United States alone, with the potential to reach $500 billion globally. Given the focus on autonomy and electric propulsion, the smaller size of vehicles compared with traditional aircraft, and the goal of mass adoption, the value chain in new aircraft markets will look very different from traditional aerospace. Vehicles might get commoditized, and infrastructure (such as skyports and charging stations), air-traffic management, and consumer interface will likely play a more pronounced role. This prospect has attracted new competitors and potential partners—from ride-sharing platforms to automotive OEMs to software start-ups—with different and potentially disruptive approaches, including mass-production capabilities, agile and rapid iteration development, and deep analytics.

Every aerospace player must understand the potential structures that these new markets can take as well as the technological, regulatory, and social advances that will unlock their potential. Identifying emerging value pools and staking a claim early on, either alone or through strategic partnerships, will not only open up these new markets but help protect existing ones.


External forces will continue to affect the aerospace industry, providing both challenges and opportunities. Success in the long run will depend on how players address these priorities and the strategic actions they take today.

The best video calling apps for Android phones

Video calling takes phone calls to the next level, by allowing you to see the person you’re calling as well as hear them. Many video apps allow you to make phone calls for free, saving you money, especially if you’re calling someone internationally. There’s no limit to what video calls can do for you. You can catch up with your family if they live far away. You can have your friend come shopping with you, even if they couldn’t physically be there. Once you’ve experienced video calling, you’ll wonder how you managed without it.

This guide gives a brief overview of the top 6 video calling apps. Read on to learn about the various features of each app to decide which one is the right one for you.

Google Duo
google duo
Google Duo is free to download and use and allows you to make video and voice calls to an individual. It encrypts all calls and includes the Knock Knock feature, which lets you see the person calling you before you answer so you can decide whether you want to take the call. It’s incredibly easy to use – after downloading, it will send a text to verify your number, which it will auto-fill before taking you to the app. To use Google Duo, simply tap on the ‘New Call’ icon to see the list of people with Google Duo installed. If someone you want to call doesn’t have it installed, you can invite them to use the app. Once they’ve downloaded it, you can call them. Once you’re in a call, you can change the audio source, change whether you use front or rear-facing cameras, mute the call, and view a thumbnail of what your contact is seeing. It is incredibly simple to use, although it doesn’t currently offer the ability to make group calls.

Facebook Messenger
facebook messenger
Another free app, Facebook Messenger is a must-have if you love Facebook and want to get the most out of your Facebook account on your phone, although you don’t need a Facebook account to use it. The Facebook Messenger app allows you to make high quality HD audio calls, as well as video calls, all for free as long as you use Wi-Fi. Facebook Messenger also gives you access to the full range of functions for your Facebook account, including the ability to read your messages, send stickers and voicemail messages, start group chats and even send money.

WhatsApp
whatsapp image
WhatsApp is a communication app that allows you to easily organise and send messages, as well as share videos and pictures. Now it also offers the facility to make video calls. Unlike some other video calling apps, it doesn’t matter what type of device a contact has. As long as they have WhatsApp installed, you can make a video call to them. All you have to do is go to the call tab and then tap on the call button to start a call.

Skype
skype image
Skype is probably the best known video messaging app and comes with all the features you’d expect from a leading app. As well as one-to-one video calls, you can also make group video calls including up to 25 people. You can send video and voice messages, text, instant message and more. If you have an Outlook account, you can start a video call from an email, which is very convenient if you need to get clarification on a specific point. Skype will even translate a call for you if you need to talk to someone who speaks a different language. With all the competition around for video calls, Skype is still striving to be at the cutting edge of video calling.

Viber
viber
Viber allows you to make free video calls, group chats and large public group discussions of up to 100 participants, as well as make phone calls and send texts, images and stickers. Anyone can follow a public chat, although you can only participate if you’ve been invited into the chat. Unlike many other apps, Viber doesn’t require you to set up a username and password. All you have to do is input your phone number to get started. You’ll need to register using an access code or callback number the first time you use the app. You’ll then see a Viber icon next to your contact list so you can use Viber to make free calls to others with the app instead of your phone’s network. Viber also allows you to send short voice messages during an IM chat if you want to say something instead of taking the time to type it. Viber is especially good for those who want to share information with a large group of people quickly.

JusTalk
JusTalk
If you find all the features of other video chat apps confusing, then JusTalk is a great alternative. It’s simple to set up and use, and although it doesn’t offer the same array of features as the other apps in this guide, if all you want to do is make high quality, free video calls, JusTalk is all you need. You can add contacts from your phone or Facebook account and JusTalk will tell you who is available for calls – although they’ll need to have JusTalk installed for you to be able to use the app with them. Although you can’t share documents with the app, you can still send images from your phone’s memory and, probably the most fun part, doodle and scribble over the screen as you’re chatting and then send your doodles to the other person to start a new call. It also has a Night Vision feature to make it easier to see a caller in poor lighting and allows you to record parts of your chat to replay later. Although it might not have the most features, it’s ideal for those who just want to be able to make video calls without any hassle.

Now that you’ve got an overview of some of the most popular video call apps, you can choose which one(s) to download and start using. You might even want to install more than one and see which app you like best.

Once you’ve started video calling, you might like to experiment with other ways of communicating. Try setting up an email account on your phone so that you can keep track of your email wherever you are or explore more of the messaging features on WhatsApp.

Digital Handpan V2: The Touch Drum

A while back, I introduced a prototype for a concept instrument I had been working on: a capacitive touch digital handpan. Since then, I’ve taken the project to the next level by turning it into an actual instrument that can be played by itself or with other instruments!
alt text
After searching high and low for the perfect enclosure, I came upon an old head of lettuce container at a local thrift shop. It ended up being the perfect size and shape for what I wanted.

I started by cutting out circles of copper tape and adhering them to the top portion of the container. Tiny holes were drilled to pass through small gauge wire and solder it to the tape. I attached the pads to the prototype to ensure that they would still behave as expected. After a successful test, I went to work adding the rest of the interface to the top half, including the buttons, potentiometers and the Teensy View screen.
inside top
Note: I added a ninth pad but ended up not using it in the final design.

The OLED screen displays useful information, including the current musical scale and the key (or tonic) of the scale, volume, decay, the current octave, and the current battery percentage and voltage.
Teensy View OLED Screen
The bottom of the container was cut out to allow a speaker to sit nicely inside the enclosure and produce sound out the bottom of the drum.
speaker
Inside the drum, a snappable protoboard was used to contain all the electronics, which sit on a plastic platform on top of the speaker. At the center of it all is a Teensy 3.2 and a Teensy Audio Shield. A SparkFun LiPo Fuel Gauge is used to provide the battery statuses, and a 5V/1A Charger/Booster to handle charging and boost the battery voltage to 5V to power the audio circuitry.
alt text
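The battery readout on the OLED comes from that Fuel Gauge board, which (if I recall correctly, so treat this as an assumption) is built around the MAX17043. As a sketch of the math involved, here is how that chip's raw registers translate into the voltage and percentage figures shown on screen, per its datasheet formats:

```python
# Hedged sketch: converting raw MAX17043 readings (the IC I believe is on
# SparkFun's LiPo Fuel Gauge) into the numbers shown on the Touch Drum's OLED.
def vcell_to_volts(raw16):
    """VCELL is a 12-bit ADC value left-justified in 16 bits, 1.25 mV/LSB."""
    return (raw16 >> 4) * 1.25e-3

def soc_to_percent(raw16):
    """SOC register: high byte is whole percent, low byte is 1/256 percent."""
    return (raw16 >> 8) + (raw16 & 0xFF) / 256.0

# Example raw values (made up for illustration):
print(vcell_to_volts(0xC800))   # 4.0 V
print(soc_to_percent(0x6380))   # 99.5 %
```

On the Teensy itself the same arithmetic runs in C++ after an I2C read, but the conversion is identical.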
Fellow SparkFun Engineer Marshall, who helped tremendously with this project, persuaded me to try out his technique of using a motor driver as an amplifier. It worked surprisingly well, so I left it in this design. However, using a Mono Audio Amp Breakout would work just as well and is likely more user-friendly.
alt text
On the back is the remainder of the user interface. There is a ¼" jack to provide line-out audio and a switch to toggle between line out and the internal speaker. The red switch controls power, and the barrel jack above allows 5V in to charge the LiPo.
back
Last, to give the drum a little more functionality, I added one more ¼" jack on the back that connects the switches on a foot switch to two digital pins on the Teensy. These switches increment or decrement the current octave. I wanted to be able to use the foot switch with professional audio equipment, such as the BOSS RC-30 Loop Pedal, so it was modeled after the BOSS FS-6 Dual Footswitch. Most professional audio foot switches, including the FS-6, use a normally closed (NC) switch vs. the normally open (NO) switch found more commonly in embedded electronics. I found some NC switches from this Guitar Pedal Parts site. Note that a stereo cable must be used to utilize both of the switches. An additional 3.5mm jack was added just as a secondary cable option.
alt text
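The NC-versus-NO distinction matters in firmware, too: the resting pin state is inverted, so the "pedal pressed" edge flips. A small sketch of that logic (the helper name and pull-up wiring are my illustration, not taken from the actual Teensy code):

```python
# Sketch of octave foot-switch edge detection. With a pull-up resistor,
# a normally closed (NC) switch shorted to ground reads 0 at rest and 1
# when pressed (the contact opens) -- the inverse of a hobby-style NO switch.
def pressed_edge(prev, now, normally_closed=True):
    """Return True on the reading transition that means 'pedal just pressed'."""
    if normally_closed:
        return prev == 0 and now == 1   # contact opens on press
    return prev == 1 and now == 0       # contact closes on press

print(pressed_edge(0, 1))  # True for an NC pedal like the FS-6
```

Wiring a hobby NO pedal into firmware expecting NC (or vice versa) makes every octave change fire on release instead of press, which is why matching the FS-6 convention was worth the trouble.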
The switches and jacks are housed within a SparkFun Aluminum Enclosure, which is very robust.
alt text
To see the Touch Drum in action, check out the video below.

Materials


A majority of the parts used to build the Touch Drum can be found in the wish list below. Additional supplies needed include an enclosure to house your drum and a speaker.

Tube Amps in the Age of Bluetooth Speakers

Full disclosure: this is part confession, part rant and part technical breakdown of obsolete tech. And it’s probably going to be long. You were warned…

The Story So Far…

First, some history. When I was growing up, we had a Grundig Majestic (links to which are sketchy and few) console radio/turntable that I sorta unofficially inherited for my room. And I fell in love with this thing. It had the best sound of anything I was hearing, and it sounded distinctly different. And that’s because it used tubes as the active elements.
alt text
My baby’s seen better days, but she’s all there.

Fast-forward to about 10 years ago. I thought it would be a fun side gig to design and produce tube amps for money. So I created my first design, just to get my feet wet, with an EL84 ultra-linear output stage, 6922 concertina and a 12AX7 input/bass/treble section. It was a great success for me, and I dressed that sucker up with LEDs and a bunch of other features. And once that was done, I started on the next design with an EL34 output section, and I got some (6) output transformers custom-built by a gentleman from the Seattle area who was wise in the ways of such things.

Then my partner and I went to the Rocky Mountain Audio Fest. I was really happy with my work thus far, but I needed to immerse myself more in the ways of the competition. But in attending, I came face-to-face with the sort of bald-faced lying that happens in the audiophile gear genre — $100K+ for a pair of speakers. Little sawhorses propping up a power cable a few inches off the floor because, you know, we don’t want to get any floor-induced parasitics into the audio. All justified with lines like, “Achieve the truest representation of what the artist intended.” Are you kidding me? Listen, I’ve been on the other side of this equation before, and you’re giving this imaginary “artist” person way too much credit. And these sawhorses…do you know what happens to your precious line voltage on the other side of that outlet? But oh, the clarity! Right. Miles and miles of unshielded cable and substations, and you think you’ve improved your sound by suspending the power cable 6 inches above your floor for the 4 feet between the wall outlet and your gear? Pardon me for saying, but I think that if you put any stock in that tripe, you probably deserve to pay $4K for RCA patch cables.

But whatever, right? People have to have their hobbies, and I’ve spent tons of dough on mine. It so happens that for this hobby, the exorbitant prices to participate are less about the sound of the gear than the identity it buys the purchaser. That’s fine, and not entirely uncommon. But I’m not going to be the one to tell you stories to extract as much of your money as possible. I just can’t be that guy. Bad businessman Pete, I guess. The quandary brought the project to a halt.

Then some life happened. Kids and all that goes with them, and various other extreme draws on my time and energy. But about five-ish years ago, I decided that I couldn’t let those output transformers I bought sit on the shelf forever. I had to finish the design and build the two EL34 monoblocks I had planned years before. It’s been a long haul, but this past Xmas I was determined to finish these two amps. Why has it taken so long? Lots of reasons, top of the list being the kids. And it doesn’t help that I’ve got the attention span of a gnat. But also, Bluetooth® speakers are just so convenient! And I also started at some point to buy digital music. Phone/tablet/computer + Bluetooth speaker + MP3s = a really easy and portable soundtrack to your life. Does that combination sound as good as a tube amp and a pair of nice B&W speakers? It sure does NOT. But it’s just convenient enough to keep my amps terminally on the back burner. Until this last Xmas.
alt text
Now, I have to admit that I didn’t technically meet the challenge. I did not finish both amps. But I did effectively finish one — all the bugs worked out, passing audio and sounding really, really good. The second one should just be turning the crank. Heh. Yeah.

The Design
But enough of that. What are we actually looking at? It’s largely the monoblock design I talked about above, and the two of them are book-ended. 12AX7 input/bass/treble section, 6922 concertina and an EL34 ultra-linear output section. The HT supply is regulated 350V, and half of the LT supply is DC-regulated 6.3V. Schematics graciously provided by my notebook. This is all point-to-point wiring, so an electronic form of the schematics doesn’t currently exist. They may be a little sketchy for the casual observer, so lemme know if you’ve got questions.

I’ll start with the LT supply (low voltage, high current for heaters) because it’s the section that’s given me the most grief to get right. First, keeping 60Hz out of the audio is paramount, which is why I went for regulated supplies. But with 6.3VAC RMS (8.9V peak) out of its low-voltage windings, the power transformer I chose didn’t give me much headroom to pull that off for the 3.6 amps the tubes were going to need for their heaters. I swear I did all these calculations back in the day that would allow this purchase…but it appeared I had shortcut myself into a corner. To illustrate, 8.9V peak - 2 diode drops (1.2V) - regulator dropout (negotiable, but call it a very generous 1V) - 6.3V output = 0.4V allowable ripple voltage. More than that and the regulator fails to regulate. I’ll spare you the equation, but that takes many tens of thousands of uF to accommodate. And while that’s a possible solution, it’s hardly an elegant one. So after many iterations, I ended up with this circuit.
alt text
As it turns out, I can effectively ignore 60Hz noise coming from the EL34 heaters because of the push-pull output topology: since the output transformer translates changes in current and the heaters are impacted in the same way by the same noise (as in common mode), any current fluctuations will be equal and opposing, and 60Hz at the output will not result. So I can basically ignore 3 amps for regulating, but I still have to separate the sections with a Schottky diode so that my filter cap for the 12AX7 and 6922 sections doesn’t go to feeding the EL34s. For those, I use just enough capacitance to get the average voltage into a workable range (4,000uF and about 6.4V). For the remaining 600mA, I use a low dropout regulator (LM1084) good to 5 amps, which I’ve made adjustable with a 10-turn trimpot. And using a mere 10,000uF, I can dial out the last of the ripple, giving me about 6.1V. The rectifier that feeds both of these sections also uses Schottky diodes good to 8A continuous and 0.2V forward drop (STPS8L30).
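The equation I spared you above is the usual hold-up capacitor rule of thumb: full-wave rectified 60Hz mains ripples at 120Hz, and the cap sags roughly ΔV = I/(fC) between charging peaks. A quick sketch (an approximation, not a SPICE run) shows both the ugly brute-force number and why splitting off the 600mA works:

```python
# Rule-of-thumb hold-up capacitance: dV = I / (f * C), so C = I / (f * dV).
# Assumes the cap carries the full load between 120 Hz charging peaks.
def cap_for_ripple(i_load_amps, dv_volts, f_ripple_hz=120.0):
    """Capacitance (farads) needed to keep ripple under dv at a given load."""
    return i_load_amps / (f_ripple_hz * dv_volts)

# All 3.6 A of heater current through one regulator with 0.4 V of headroom:
c_brute = cap_for_ripple(3.6, 0.4)   # 0.075 F = 75,000 uF -- hardly elegant
# Only the 600 mA slice that actually needs regulating, same 0.4 V budget:
c_split = cap_for_ripple(0.6, 0.4)   # 0.0125 F = 12,500 uF

print(c_brute * 1e6, c_split * 1e6)  # 75000.0 12500.0
```

That 12,500uF figure lines up with the "mere 10,000uF" in the final circuit, once you account for the LM1084's dropout being less generous than the 1V I budgeted.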

Next is the HT supply, which is regulated 350V at 200mA. I had some experience with this topology from my first design, so this one really didn’t present me with any surprises.
alt text
I lifted this from a Morgan Jones book, but I think it’s an LT reference design…somebody’s reference design, anyway. I’ll leave it as an exercise for the reader to suss out its operation. There’s still a bunch of amp to get to, and this bit isn’t all that interesting. But yes, that’s an LM317 in the middle.
Now let’s talk about audio. My approach here was a little different from what might be expected, for better or worse. I consider it experimental, but I’m really happy with the results so far. EL34s are relatively high power (from my perspective), but I honestly don’t need all that much power. So I’ve traded gain for bandwidth and eliminated the need for global feedback. Or so I think for now — no promises. But in a lot of places where you might think there should be a cap across a cathode resistor, there won’t be one. And the BW versus gain trade-off is why.
The input section is as follows:
alt text
I may have lifted this circuit from the same aforementioned Morgan Jones book, but I think I got it off of the DIYAudio forums. If you’re a tube guy/girl, you may recognize it. I’ve spent enough time looking at the circuit to figure out that bass goes to the top of that box and treble goes to the bottom (by evaluating at arbitrarily high and low frequencies). But I’ve largely just accepted these values without recalculating. I’ve built and tested a few, and performance is good, giving an overall gain of about 1.

From there we go to a phase-splitting section in a concertina configuration with a 6922 tube, and on to the EL34 output section.
alt text
I’ll keep the description high-level. The first stage of the 6922 sets the bias point for the second stage, which gives us equal and complementary signals on the cathode and the plate of the second stage. Those signals drive the matched pair of EL34s, each of which is set to a quiescent plate current of 60mA. I don’t have a bias adjustment installed (maybe later?), so the tubes have to be matched.
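As a rough sanity check on that operating point (a sketch assuming cathode bias through the 470-ohm resistors mentioned later in Testing, and neglecting screen current), the numbers sit comfortably inside the EL34's published 25W plate-dissipation limit:

```python
# Back-of-envelope EL34 operating point, per tube.
HT = 350.0    # regulated HT supply, volts
RK = 470.0    # cathode resistor, ohms (the 470s mentioned in Testing)
IQ = 0.060    # target quiescent plate current, amps

vk = IQ * RK              # cathode sits roughly 28 V above ground
p_plate = (HT - vk) * IQ  # idle plate dissipation, ignoring screen current

print(round(vk, 1), round(p_plate, 1))  # 28.2 V, 19.3 W -- under the 25 W max
```

This also hints at why the 55mA they actually idle at isn't alarming: it moves dissipation further from the limit, at the cost of sitting a little closer to cutoff.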

Logistics
alt text
The layout of the chassis is nothing earth-shaking. Power (left) and audio (right) transformers are on opposite sides and orthogonal so as not to interfere with each other, tubes are in the middle, power supplies are to the back where I have access to a heat sink, and controls are to the front. As I mentioned earlier, the two monoblocks are book-ended, so everything you see here is swapped left-to-right on the unfinished one. The funny-looking thing that’s stood-off from the heat sink is my heater rectifier. I accidentally ordered the wrong package for those Schottky diodes (thought I was getting TO-220, ended up with DPAK), so I had to get clever with my heat dissipation. It works well enough for now, but I really need to get those things on the heat sink proper — probably all four diodes on a single PCB flush-mounted to the heat sink.

The chassis itself is made up of many pieces of metal bolted together. The difficulty with that is that I’ll need to periodically crank all the screws down to make sure everything is well-grounded. Also, the choice of using the green acrylic for a faceplate necessitates running a ground wire to each of the pot cases; otherwise you get a big “pop” in the output every time you touch one.
alt text
All the wiring is point-to-point. I like to keep my high-Z lines as short as I can, so components are mounted on the tube sockets. Morgan Jones would describe these examples as “howlers,” but I’m pretty careful keeping things secure and insulated because high voltage scares the bejesus outta me. If this were for production, I might do it differently.

Testing
If you read through my chicken-scratch schematics, you’ll see some notes and numbers indicating test measurements (DC bias points, really). But primarily, the difficulties I’ve run into are:
  • LT supply. How many times did I have to go around that block before I was happy? I lost count.
  • Missing grid resistor on the first stage of the 6922 messed up my bias points for both stages.
  • Plate current for the EL34s came in a little lower than expected at 55mA. It brings the bias point a little closer to a less active region, but it doesn’t seem to be a problem for the sound. I’ve also got 400 ohm resistors I can swap out for the 470s that are there now, and I’ve got plenty of headroom from my HT supply. But I won’t throw the power away if I don’t have to, in spite of all that junk I said about trading gain for BW.
  • 60Hz everywhere. Everything’s got to be well-grounded, and all references must be shared. Don’t try to get clever with this; it will only make you cry.
Once all the 60Hz was gone from the output, the audio test yielded nothing displeasing. Is it a flat response? Well, I can make it flat with the use of the bass and treble controls, but the point of those is to make it sound how I like, and I don’t like a flat response. I tend toward the “smiley-face,” somewhat scooped mids, and I can dial that in without any trouble. And the amp has just enough non-linearities to sound like a tube amp. My nostalgia bone is well-satisfied.

I haven’t tested for power output. Well…I was listening to it in my basement the other night, and my wife later accused me of “rocking out down there” with something of a sneer in her voice. I’d call that a successful test.

Changes? Additions?
Besides the things I’ve already mentioned (updated heat sinking for the LT rectifier, bias adjustment for the EL34s)…
  • LEDs! Not addressable, flashy junk; that’s just gauche. But on my first design I had red LEDs illuminating the underside of the tubes to indicate low voltage presence. Then blue LEDs would illuminate the tubes when you had the HT turned on. I’ll probably do something similar here.
  • Bias monitoring at all stages. Also a thing I had set up on the last one, and it always kept me informed as to what the circuit conditions were, and would switch off the HT if something were dramatically out of whack.
  • Relay connection of the speakers to the output transformer. Again, had it in the first design; haven’t implemented here. There’s a big “pop” in the speaker when the HT gets turned on that I should make go away. Switching on the speakers after that power-up would fix it.
But these things are down the road a bit. For now, I’ll just be happy to have functioning amps. Finish the second one in 2–3 weeks? Dunno, man, I’ve also got a couple of new drone frames about to hit my doorstep, and it’s hard for the gnat to focus.
References

I’ve mentioned the name Morgan Jones a couple of times. If you’re interested in such esoterica as vacuum tubes, I’d highly recommend checking out his books.

Review: Alienware’s first wireless headset gets (almost) everything right



Alienware dipped its little green toes into the wireless headset market for the first time this year with impressive results. The AW 988 is a premium device that delivers on its promises, but there’s some room for improvement.

The AW 988 leaps onto the scene with 7.1 surround sound, decent drivers, and enough clever design decisions to merit it a place towards the head of the pack. Before we get too far, however, let’s talk about this thing’s nuts and bolts.
  • Drivers: 2 X 40mm
  • Connectivity: 2.4 GHz wireless, 3.5mm audio jack
  • Frequency response: 20 Hz to 20 kHz
  • SPL: 107 dB (±3 dB)
  • Battery life: 8.7 hours (Alienware FX on), 15 hours (Alienware FX off)
  • Microphone: unidirectional, noise-cancelling
I’m pretty ambivalent about the aesthetics of this headset. On the one hand I love the way it feels in my hands and on my head. The soft-touch material, premium foam padding and tight construction make it feel like something a fighter pilot would wear. But, on the other, it’s black and boring and takes almost no chances.
It’s incredibly well-constructed. The microphone tucks neatly away, out of sight and out of mind, when I’m feeling anti-social. I particularly liked that stowing it automatically muted it. The metal headband slides without friction and it never once snagged my hair — even when I tried to get it to.
The headband is wide and comfortable with a padded soft-touch covering. The ear cups rotate 90 degrees and fit over my ears – most over-the-ear cans bend the tops of my ears slightly; these didn’t. Aside from it being a tad heavy – 0.38kg (0.84 lb) – this headset feels as good as any I’ve used.

Other nifty design choices include a place to store the USB dongle inside the headset itself – a feature that gamers who travel a lot are sure to appreciate – and two-zone customizable LED lighting.
My only quibble with the design was the inclusion of tiny buttons and dials that are practically impossible to use without taking the headset off, at least until you’ve memorized their layout. It’s not a big deal once you get used to it, but it seems like Alienware could have at least made the buttons stick out or something.
It’s a solid headset that feels like it can handle the dangerous life a professional gamer’s accessories have to endure. But looks aren’t the top consideration when it comes to audio gear.

This headset sounds amazing as long as you’re gaming. I tested it on PC with a variety of games including Resident Evil 7, Battletech, and Battlefront II. While playing the latter, a shooter, I was particularly impressed with the headset’s sound stage.

I could locate where lasers were coming from with my eyes closed, thanks to the high-fidelity nuances provided by the 7.1 surround sound experience. And the roar of a TIE fighter or X-wing strafing overhead was jaw-dropping.

And the PC experience is made even better by the Alienware control software. I griped a bit about the company’s bloated Command software when I reviewed the Alienware Elite Gaming Mouse, but the headset has its own dedicated program. And it’s great.

You can change EQ settings between movies, music, and gaming – and the gaming settings break down to specific genres such as racing and shooter. Settings for lighting effects, noise-cancelling for the mic, and individual band EQ are also in the software. But my favorite feature was a cool tool called Audio Recon.

It basically gives you SONAR for FPS games by providing a visual indicator of the direction the strongest source of sound is coming from. After a few minutes I found myself glancing at it as just another part of my routine, and it actually seemed useful in Battlefront II. I didn’t get the chance to test with more frantic shooters during my review.

It’s clear Alienware put everything it had into crafting this headset with FPS gamers in mind. And I can honestly say my K/D ratio seemed a bit higher while wearing it, but take that with a grain of salt because my abysmal scores have nowhere to go but up anyway.

I also tested the headset on an Xbox One S with Dolby Atmos. And it sounded even better. I fired up Forza Horizon 4, NBA 2K19, Fortnite, and Overwatch – each one sounded amazing. Despite the fact the AW 988 doesn’t have an external amp or huge internal drivers, it still manages to reproduce top-notch audio with surgical precision.

Alienware didn’t jam these things full of low-end to compensate for a lack of volume. And that means the sound fidelity isn’t propped up by reverb or bass-bleed. During gaming this is a revelation. For example, both individual voices and full crowds sound real — really real. With my soundbar, player interviews and crowd noise in NBA 2K19 sound as though they were previously recorded and are being played back on an iPhone speaker held up to a microphone. But wearing the AW 988 is so immersive you can close your eyes and try to pick out an individual fan screaming from the top row.
Unfortunately, my love affair with the headset’s audio fidelity begins and ends with gaming. It’s not going to cut it as a music lover’s daily driver. The sound stage is too big for anything other than classical or electronica, and it’s too quiet.

It’ll get loud enough to give you ear-fatigue, don’t get me wrong, but if you’re trying to crank it to “11” you’ll have to be picky about the music you listen to. Songs with bass-heavy mixes, like Jidenna’s “Long Live The Chief” or Kids See Ghosts’ “Feel The Love” sound clean – if a bit too crisp. But older songs like Ray Charles’ “What’d I Say” and Creedence Clearwater Revival’s “Have You Ever Seen The Rain” sound tinny and a bit too far away. It feels like most music is swallowed up by the AW 988.

That being said, I don’t think you’re going to find a better sounding 7.1 wireless headset for gaming. It’s not just the sum of its parts that makes the AW 988 a killer accessory – if you want to spend a bit more, you’ll get better options like dual batteries or higher volume – but the AW 988 is tuned incredibly well for gaming; I’ve never heard its equal.

At 200 bones the AW 988 is a bit pricey, but after spending a few weeks with it I’d recommend it over anything I’ve tried at the same price or lower. If you’re a serious gamer who doesn’t already have a favorite headset, I suggest giving this one a shot.

How to Build a Floating Bridge in 12 Minutes



As the County Fire tore through Northern California this summer, well on its way to burning 90,000 acres in Napa and Yolo Counties, Harley Ramirez got a call. The 132nd Multirole Bridge Company of the California National Guard was heading toward the blaze, and Sergeant First Class Ramirez was being put in charge. His team’s mission: help the California Department of Forestry and Fire Protection, better known as Cal Fire, get its people, equipment, and supplies to the front line by building a floating bridge across a river in Cache Creek Regional Park. As quickly as possible.
Before long, Ramirez was watching a truck called a common bridge transport slide a folded-up pile of metal into the river, where it unfurled itself in two splashy steps, a piece of origami reverting to its original state. The suddenly flat slab of aluminum floated on the surface, held in place by ropes gripped fiercely by Ramirez’s soldiers. This was the first piece of the improved ribbon bridge the 132nd had come here to build, a Lego-like thing that would save California’s firefighters vital time in their efforts to contain the County Fire, and provide them an escape route should they have to fall back.

The 132nd is just one of many military units around the country and planet trained to install this sort of floating, temporary bridge, meant to last a few weeks and move supplies and people when war or natural disaster nix standard engineering solutions. This variety of bridge—designed by General Dynamics European Land Systems—has bridged the Tigris and Euphrates Rivers as American soldiers invaded Iraq in 2003, helped workers get cranes and oil booms into the Gulf of Mexico to contain the 2010 Deepwater Horizon oil spill, and stretched across Poland’s Vistula River during NATO’s training exercise, Exercise Anakonda, in 2016. And as Hurricane Florence continues to flood the mid-Atlantic, they’re being readied to move emergency personnel and relief supplies into the river-surrounded town of Georgetown, South Carolina.

Any improved ribbon bridge is made up of two types of bays, essentially big floating rectangles. The ramp bays, which have one sloped side, will connect to each shore. The interior bays go in between them; their number depends on how long a bridge you’re making. Each is 22 feet long, weighs about 13,000 pounds, is made of aluminum, and floats the same way a pontoon does. Usually, the crew launches one of the ramps into the water first, using that common bridge transport, which backs up to the water and uses a crane arm to slide the payload off its flatbed. They usually angle this first piece of equipment upstream, so it’s out of the way as they drop the other bays into the water.

For easier transport on land, each bay is folded up like a ‘W’. Once on the water, it unfurls with a splash, and crew members in bridge erection boats—essentially high-performance tugs—nudge it into position. Once two bays are lined up, the soldiers dash over to lock them in place. They start by using supersized hex wrenches to drop heavy-duty pins, two inches in diameter, into a set of interlocking loops. Then they deploy the “dog bones,” spring-loaded, dumbbell-shaped locks that span the two bays, fitting into a special groove. Meanwhile, the landlubbing crew members use cables to anchor the ramps, usually sinking them into the ground or tying them to trees. And that’s about it.

Uninstalling an improved ribbon bridge is about as simple as setting it up. Unlock the bays, push them to shore, and use the transport’s crane arm to winch them back onto dry land, folding them back into that W-shape in the process. Then put them away until the next time someone needs a bridge over troubling waters.

“It’s a hasty way of making a bridge,” says First Lieutenant Colin Francis, formerly of the 132nd, who took part in the Cache Creek build. An effective one, too. An improved ribbon bridge can support 70 tons or more—enough to carry an M1 Abrams tank—and is solid enough that you’ve got to drive across in a semi-truck to feel it move in the water.

In ideal conditions, a trained crew can build a 100-foot bridge (that’s with two ramp bays and three interior bays) in about 12 minutes. “Ideal” here means calm water that’s at least two feet deep, with a shallow bank and plenty of room to maneuver. But war zones and natural disaster areas are not what you’d call ideal conditions, so the 132nd Multirole Bridge Company trains for as many situations as possible. Not that they can prepare for everything.

Even before dropping that first bay into the river, Ramirez knew his team had a problem. They were working around an existing bridge (built in 1930 and no longer rated to support the heavy duty equipment Cal Fire needed to move), and didn’t have the space to deploy the bridge erection boats that maneuver the bays into place. The crew had started by hanging onto the bay with ropes, but in an unhelpfully swift current, they were losing the tug of war.

Then they saw the bulldozer Cal Fire had brought to the crossing, and changed the plan. The soldiers tied the ropes onto the machine and stepped back. Then the contractor who’d come up to drive the thing hopped in the cab and carefully moved forward and back, easing the bay—and the next four—into just the right spot. No bridge erection boats, no problem.

“We have no training to bulldoze a bridge into place,” Francis says. But they are trained for improvisation and flexibility—to do whatever it takes to build what needs building. The unusually complex build took them just a few hours. Hardly a record for the 132nd, but fast enough to let Cal Fire get its equipment and manpower where they had to be that very night.


Acer Predator Triton 700 laptop review: The closest thing to a high-end gaming ultrabook


It’s always been hard to reconcile the dream of portability with power in a gaming laptop. Laptops can be powerful and massive or wimpy and slim.
A slim laptop will not be thermally efficient, leading to a drop in performance, and a thick laptop isn’t portable.

A good gaming laptop is one that can handle a sustained thermal load, i.e. long gaming sessions, but such a laptop is not one that you’ll normally want to carry around with you wherever you go.
Attempting to reconcile these two ideals is NVIDIA with its ‘Max-Q Design’. NVIDIA makes some of the best graphics cards (GPUs) on the planet, and GPUs are easily the hottest components on a gaming laptop, both literally and figuratively.

Normally, all GPU makers work to extract the maximum possible performance from their designs. With Max-Q, instead of maximum performance, the focus is on maximum efficiency. In the case of a high-end GPU, that ‘Max-Q’ point of highest efficiency could be at about 85 percent of its performance. Beyond this point, temperatures and power draw can scale exponentially.
Essentially, a Max-Q GPU is slower than a regular GPU, but only to an extent that will have little to no perceivable impact on most games. In exchange, you get a lower power draw (i.e. longer play time) and lower thermal load (i.e. the laptop won’t need to be so beefy).

It’s an exciting idea on paper, but how does it hold up in real-world use?
Among the handful of laptops in the world to support Max-Q is the Acer Predator Triton 700, and it’s with this device that we attempt to answer that question.

Build and Design: 5.5/10
The Triton 700 is the closest thing to a high-end gaming ultrabook. Image: tech2/Anirudh Regidi
For a laptop that costs over Rs 3 lakh, the Triton 700 is surprisingly flimsy. It’s made from the thinnest of metals and bendy plastic. I was a little surprised at this, but then again, I had to remind myself that the overriding priorities with this laptop were low weight and thinness. It is a mere 18.9 mm thick, after all. At around 2.4 kg, it’s also among the lightest 15.6-inch devices around.
This is especially impressive when you consider that you’re looking at one of the most powerful gaming laptops in the market today.

Open the lid and you’ll be struck by the layout of the keyboard and trackpad. The keyboard and trackpad have swapped places, with the trackpad doubling as a window into the internals of the laptop.

It’s a cool design, but one distinctly lacking in functionality. Moving the keyboard to the front means that there’s now no palm rest.

Ergonomically, this laptop is an absolute pain to use. The forward placement of the keyboard makes the laptop uncomfortable to use on your lap or on small tables, and the placement of the trackpad just makes it awkward to use without a mouse.

Ports are present all over the device, even on the rear.

Keyboard and Trackpad: 4.5/10
The kindest thing I can say about the Triton’s keyboard is that it’s different. These are mechanical keys, after a fashion, and make an audible click when depressed. They clearly don’t have the tactile, linear response of a true mechanical keyboard and they also don’t feel as good as those you’d find on a MacBook or ThinkPad.
The clicky feedback on the keys is more gimmicky than useful. Image: tech2/Anirudh Regidi

What’s frustrating about these keys is that they click for the sake of clicking. A mechanical Cherry MX key “clicks” when the keystroke is registered. The Triton 700’s keys just click the moment you press them, before the keystroke is registered, undermining the whole point of audible feedback.
If anything, the trackpad is worse than the keyboard. Despite feedback issues, the keyboard actually works and can be fun to type on. The trackpad, on the other hand, fails at everything it does.
First, the glass trackpad is supposed to be a window into the internals of the laptop. All you really see when you take a peek, though, is a blurred view of some copper heat pipes and the faint glow of the LEDs around one of the cooling fans.

Second, the trackpad is meant to, well, track. Unfortunately, it doesn’t track very well and at least with my fingers, it didn’t even detect clicks properly. This is all the more frustrating when you realise that there’s no right-click button. You have to use the double-tap-to-right-click gesture, and it doesn’t always work. Swipe gestures also don’t respond well for the same reason.
That glass window looks cool, but it ruins the trackpad. Image: tech2/Anirudh Regidi

Thankfully, this being a gaming laptop, you’re unlikely to ever use the device without a mouse attached anyway. Woe be you if you don’t, however.

I can understand why Acer chose this odd layout with the keyboard in front (ASUS chose to do the same, after all), but why couldn’t Acer have gone for a nice keyboard instead of a gimmicky one?
I also don’t think that I would have minded a slightly thicker laptop if it came with a more traditional layout.

Features: 9/10
Powered by an Intel Core i7-7700HQ CPU, 16 GB of RAM and an NVIDIA Max-Q design GTX 1080 graphics card, this laptop is no slouch. The display is a 15.6-inch 120 Hz FHD panel (1920x1080 pixels) that’s G-Sync compatible, and you get 1 TB of storage, which is supposed to be two drives in RAID 0 for enhanced performance.
On the connectivity front, you won't be left wanting. Image: tech2/Anirudh Regidi

This configuration differs from the international variant, whose default option offers 32 GB of RAM and 512 GB of storage. I suppose Acer felt that 1 TB of storage would be better than 32 GB of RAM. I can’t say I disagree with them on that.

The keyboard uses “mechanical” keys with RGB backlighting (more on that later) and the glass trackpad is protected by Corning Gorilla Glass.

In terms of connectivity, you’re well covered. Options include four USB-A ports (two of which are USB 3.0) and one USB-C port that’s Thunderbolt 3 compatible, along with DisplayPort and HDMI, both of which are on the back.

Display: 8/10
You’d be forgiven for expecting a best-in-class display on a laptop retailing for Rs 3.3 lakh, a display that comes with the works: 4K UHD, HDR, 240 Hz, or at least wide-gamut support.
The 120 Hz, G-Sync ready FHD isn't the sharpest one around, but it's perfect for a gamer. Image: tech2/Anirudh Regidi
Instead, you get a slightly bluish 15.6-inch display that’s running at 1080p (FHD) and supports G-Sync at 120 Hz.

Wait. What? G-Sync? 120 Hz? Woohoo!
For a gamer, those words represent what is close to the holy grail of gaming displays. A 120 Hz refresh rate and G-Sync support means that the included uber-powerful GPU can flex its muscles while you enjoy a steady, stutter-free gaming experience.

Normal computer displays are 60 Hz units. At this refresh rate, some amount of motion blur and input lag is apparent, making the gaming experience less competitive and more “cinematic.” Normal displays also work best when the on-screen data is being refreshed at a steady pace, usually at 30 or 60 Hz. G-Sync ensures that regardless of the rate at which frames arrive, the visuals on the screen appear clean.

Cheaper laptops do sport G-Sync displays, however, so Acer’s display isn’t unique or special in that regard.
84 percent sRGB coverage and 233 nits of brightness isn't bad at all.

In terms of colour quality, the laptop topped out at 84 percent sRGB coverage and a contrast ratio of 817:1. That’s not bad, especially for a 120 Hz display, but ASUS and MSI have both shown that it’s possible to fit more accurate, brighter displays with G-Sync support.

Performance: 9/10
The performance of this laptop almost makes up for its shortcomings in the keyboard and monitor department. Almost.

As mentioned earlier, the Max-Q design GTX 1080 GPU in this device offers 80-90 percent of the performance of a mobile-ready GTX 1080, which should also make it faster than a mobile-ready GTX 1070.


Having tested laptops that cover the entire spectrum of mobile GPUs, from the 940MX to a mobile-ready GTX 1080, we can say that the Triton 700’s performance is indeed in line with those expectations.

In all our tests, the laptop’s performance sat comfortably between that of the monster that is the MSI GT73VR 7RF Titan and the MSI GS73VR 7RF Raider, both of which are behemoths that would put a desktop gaming PC to shame.

Overall, the performance is very satisfactory. Despite the laptop’s slim build, internal CPU and GPU temperatures rarely crossed the 80-degree mark, and there was no evidence of the CPU throttling under load. The only issue I noted was that the base of the laptop tends to get very hot, reaching 51 degrees C. This makes the laptop nearly impossible to use on your lap when gaming. Unless you don’t mind blistered thighs, that is.

We saw frame-rates of over 120 fps in Rise of the Tomb Raider, topped out at 200 fps in Doom and saw a phenomenal 168,152 in GeekBench 4’s compute benchmark. Video conversion time was also faster than anything we’ve tested so far, clearly hinting at the laptop’s incredible thermal performance.
The read speeds are excellent, but not best in class.
Storage performance was excellent, but not superlative. Read and write speeds were measured at 3,040 MB/s and 1,712 MB/s, but it must be noted that we’ve seen speeds in excess of 3,300 MB/s (read and write) on several laptops. Not that we’re complaining. 1,712 MB/s isn’t exactly slow.

Battery Life: 7.5/10
As with all high-performance gaming laptops, expecting good battery life is unreasonable. Despite that, the laptop made a pretty good showing for itself, and in fact, offered the best battery life while gaming that we’ve seen in this category.

Most laptops die within 30-40 minutes when playing games. The Triton 700 managed a little over an hour while playing Doom.

Our standard PCMark 8 battery test indicated a battery life of a little under 2 hours, which is decent in this class of device. In general use, which I definitely do not recommend, the laptop gave me about 2.5 hours of usage, enough for a movie.

Verdict and Price in India
As a proof of concept, the Acer Predator Triton 700 has me very excited for the future of gaming. A slim, slick laptop with the power to show a gaming PC who’s boss was only a pipe dream till this device came along.


That said, I cannot recommend this laptop to anyone, even a gamer. What it offers in performance and form factor, the laptop takes away in ergonomics. It's also ludicrously overpriced when you look at its US counterpart.

The Triton 700 is a great test bed: a product that’s still in beta, a showcase for exciting new gaming technology. Version 2.0 of the Triton, however, is what I’m really looking forward to.

Here are the cases you should get for your new phone


The best cases by type, material, and level of protection.
The onset of fall means a new line of iPhones, Samsung devices, and other smartphones. It also means once again I'm spending days trying to decide which smartphone cases are the best.

Because phone cases come from so many brands and in so many shapes, colors, and materials, there's no one perfect case for everyone. But below I've highlighted some of the most promising on the market. Some are stylish, some are ultra protective, and many are well-rounded, all-purpose cases. This list will be updated with picks for the Google Pixel 3 once it’s released. This list will also be updated with battery cases; we've got to test them first.

If you aren’t upgrading, or chose to snag one of last year’s models for cheaper, here’s our list of last year’s cases.
Skin cases
Skins
They're soft, thin, and flexible.
Amazon
To keep the look and feel of your device, go for a skin-style case. These are flexible, thin, and usually made from rubber or silicone. They're not disaster-proof, but nonetheless an effective first line of defense. Oh, and they don’t cover the front of your phone, so make sure to pick up a screen protector.
  • Incipio NGP iPhone XS Case; $20.
  • Spigen Liquid Crystal iPhone XS Max Case; $11.
  • Silk iPhone XR Clear Case; $12.
  • Spigen Liquid Air Armor Galaxy S9 Plus Case; $13.
  • LK Flexible Silicone Galaxy Note 9 Case; $10.
  • Spigen Liquid Crystal Google Pixel 2 XL Case; $13.
  • Aeska Ultra Flexible Soft Skin Silicone LG G7 Case; $8.
Shell cases
Shells
Hard and sturdy cases.
Amazon
Shell cases are thin, rigid, and great for shielding phones from scratches. The most valuable shells include silicone bumpers to help with shock absorption.
  • Incipio Reprieve [Sport] iPhone XS Case; $40.
  • Griffin Survivor Strong for iPhone XS Max; $30.
  • Spigen Ultra Hybrid Apple iPhone XR; $13.
  • Speck Presidio Grip Samsung Galaxy S9 Case; $33.
  • Anccer Galaxy Note 9 Case; $13.
  • Google Earth Live Case for Pixel 2 XL; $29.
  • OtterBox Symmetry Series Cell Phone Case for LG G7 ThinQ; $40.
Wallet cases
Put your money where your phone is.
Amazon
If you’re like me and are trying to ditch your wallet, try a case that holds your essentials. These come in many styles, but most allow you to keep cash, credit cards, and your ID in a slot on the back or in a separate area of the case.
  • NOMAD Rugged Tri-Folio Case iPhone XS; $80.
  • Mujjo Full Leather Wallet iPhone XS Max Case; $55.
  • Case-Mate Leather Wallet Folio iPhone XR Case; $60.
  • Nomad Clear Folio Galaxy S9 Plus Case; $25.
  • Trianium Galaxy S9 Plus Wallet Case; $9.
  • Bellroy Leather Pixel 2 Wallet Case; $69.
  • ProCase Wallet LG G7 THINQ Flip Case; $11.
Leather cases
Suave.
Amazon
Leather cases are not amazing for protecting your phone from weather or drops, but they sure do look classy.
  • NOMAD Rugged Leather Case for iPhone XS; $45.
  • MUJJO iPhone XS Max Leather case; $50.
  • Case-Mate Barely There Leather iPhone XR Case; $40.
  • Leather Samsung Galaxy S9 Case; $39.
  • Solo Pelle Leather Samsung Galaxy Note 9 Case; $34.
  • Kaseta Leather Google Pixel 2 Leather Case $45.
Rugged cases
Tough, disaster-proof cases
For the active and/or over-protective.
Amazon

For the outdoor adventurer—or the very clumsy—you need a suit of armor for your phone. These rugged protectors shield devices from harsh weather, long falls, and unexpected swims.
  • OtterBox DEFENDER SERIES Case for iPhone Xs; $22.
  • Pelican Shield iPhone XS Max Case with Kevlar Brand fibers; $60.
  • Urban Armor Gear iPhone XR Case; $60.
  • Urban Armor Gear Galaxy S9 Case; $40.
  • Spigen Tough Armor Galaxy Note 9 Case; $16.
  • Spigen Tough Armor Google Pixel 2 XL Case; $18.
  • Supcase LG G7 THINQ Case; $20.
Fitness cases
For the sporty types.
Amazon
For bikers, runners, and other active folks, a case that's water(sweat)proof with an adjustable arm strap can make life easier. You can listen to music as you run without having to hold your phone in your hand.
  • Spigen 6-inch Universal Smartphone Sports Armband; $8.
  • Trianium Universal Smartphone Sports Armband; $10.
  • SUPCASE iPhone XS Sports Armband; $13.
  • Bone Universal Bike Phone Holder; $10.
  • WizGear Universal Magnetic Car Mount Phone Holder; $7.
  • Studio Proper iPhone Bike Mount (must be used with one of their cases); $40.


The UE Megaboom 3 Bluetooth speaker got better because of a button

You won't find Alexa or Google assistant inside this speaker and that's fine.


UE Megaboom 3
The large, vaguely cross-shaped volume buttons have stuck around.
Stan Horaczek

The original Ultimate Ears Boom speaker debuted way back in 2013. It was rugged, looked cool, and sounded surprisingly great at a time when most rugged Bluetooth speakers had the audio quality of an FM radio tied up in a garbage bag. Now, UE is rolling out the Boom 3 and the bigger Megaboom 3, both of which carry on the tradition of excellent audio products you don’t mind taking into the shower.

What you won’t find inside those stylish, tubular bodies, however, is Alexa, unlike the Blast and the Megablast that arrived late last year. Instead of voice control, the Boom 3 adds a single programmable button, and in some ways, that’s just better than a full-on digital assistant.
UE Megaboom 3
The top of the speaker still has gunk on top of it from our testing, which included throwing it in a puddle.
Stan Horaczek

What is it?

The $150 Boom 3 and the $200 Megaboom 3 are portable, battery-powered speakers that are waterproof and shaped in such a way that they throw sound in every direction. The Megaboom 3 is a physically larger speaker—imagine a fatter version of those tall Arizona iced tea cans you can buy at the gas station—that weighs in at almost exactly two pounds, compared to the 1.34-pound Boom 3.

All the gear you need to totally dominate the local mini golf course

Ditch the loaner club and upgrade your putt-putt game.


mini golf products
Mini-golf master stroke.
Ralph Smith

Your local putt-putt spot is a silly land of spinning windmills and laughing clowns. But the tacky surroundings don’t mean you can’t go all Jack Nicklaus and totally freakin’ dominate those baby greens. The putting skills you refine while dodging miniature Stonehenges can help on the big-kid course too. Here’s the pro-grade gear you need to destroy your friends at the shortest short game.

1. Practice first.

The PuttOut is a serious training tool. A well-struck ball, with the force and trajectory to go in the hole, will roll up the plastic ramp, then back toward you. Hit too hard and the ball will launch off the back; wayward shots will fall off the side.

2. Picture success

Markings on the Wellputt Pro mat help you visualize every element of a perfect hit. Guidelines on the 10-​foot-​long turf show the right backswing distance and ideal ball paths. The drills in the included book will help hone your skills.

3. Choose your tool

The Odyssey Exo Seven’s head is aluminum in the center and stainless steel around the edges, a design that puts more weight around its perimeter. That helps the face stay square so you can hit the ball straight.

4. Roll the right way

Don’t be fooled by the poppy covers on the Volvik Vivid balls: They’re not like the cheap rocks at a typical mini-golf course. An extra-soft exterior with a matte finish helps them grip the turf and roll true instead of sliding over the terrain.

5. Read the green


Thanks to their polarization and tint, the lenses in Oakley’s Targetline Prizm Golf sunglasses help reveal the bumps and curves in the putting surface. Thick arms also help block glare and any distracting friends.

Fingerprint Identification

Fingerprint identification is a method of identifying people based on the patterns found on human fingers, which are unique to each person. It is the most popular way of establishing a person’s identity, and among the easiest and most convenient. An advantage of fingerprint identification is that a person’s fingerprint pattern remains the same throughout his or her life, making it a highly reliable method of human identification. The study of fingerprint identification is called dactyloscopy.

Defining fingerprints:

The skin surface of any human finger consists of a pattern of dark lines, or ridges, with white lines, or valleys, between them. The ridge structure changes at points known as minutiae: a ridge can bifurcate, end abruptly (a short ridge), or two ridges can end at a single point. These patterns are unique to every human being. The flow of these ridges, their features, their intricate details, and their sequence are what define the information used for fingerprint identification.

Different ridge patterns are as given below:

Fingerprint pattern
Fingerprint patterns can be divided into three groups:
  • Arches: Ridges enter and exit on the same side.
Plain Arch
  • Loops: Ridges enter on one side and exit on the other.
  • Whorls: Consist of circles or a mixture of pattern types.

Obtaining fingerprints:

There are two ways of obtaining latent prints, or fingerprints:
  • Using chemical methods: Dusting the surface with black powder can reveal the fingerprint patterns, which can then be lifted using clear tape. Different chemicals can be used, such as cyanoacrylate (which can develop fingerprints on a variety of objects) and ninhydrin (which bonds with the amino acids present in fingerprints, producing a blue or purple colour). Magnetic powder can also be used to reveal fingerprints, and works on shiny surfaces, plastic bags, and containers.
  • Using automatic identification methods: Fingerprint images can be acquired using different sensors. Examples are capacitive sensors, which obtain pixel values from the capacitance of each fingerprint characteristic (a ridge and a valley have different capacitances); optical sensors, which use prisms to detect the change in the reflectance of light from each characteristic; and thermal scanners, which measure the difference in temperature over time to create a digital image.

Fingerprint identification process:

Digital imaging technology is used to acquire, store, and analyze fingerprint data.
  • Acquiring images: As explained above, different sensors can be used to obtain digital fingerprint images. A fingerprint scanner is typically built around an optical scanner or a capacitance scanner. The optical scanner contains a charge-coupled device (CCD), an array of light-sensitive diodes that produce electric signals when illuminated. The tiny dots representing the light that hits each spot are recorded as pixels, and the array of pixels forms the image. When you place your finger on the glass plate, the camera takes the picture by illuminating the ridges of the finger.

The left image below shows the overall structure of fingerprint acquisition using an optical scanner, and the right image shows a real-world example of the system.

Storing the images:  The acquired image is then processed using digital image processing techniques as explained below:
  • Image Segmentation: The acquired image tends to contain unwanted features along with the relevant ones. To remove them, thresholding based on the variance of each pixel’s neighborhood is done. Pixels with intensity (gray-level value) greater than the threshold are kept, whereas pixels with intensity less than the threshold are eliminated.
  • Image Normalization: Different regions of the image have different means and variances. Hence, to obtain a uniform pattern, normalization is done so that the image pixels fall within a desired range of gray values.
  • Image Orientation: This estimates the ridge orientation at each point. It is done by calculating the gradient of each pixel in the x and y directions and then computing the orientation as the average of the vectors orthogonal to those gradients.
  • Constructing the frequency image: This determines the local frequency (rate of occurrence) of the ridges. It is done by projecting the gray values of the pixels along the direction perpendicular to the ridge orientation and then counting the pixels between consecutive minima in the resulting waveform, which correspond to the ridges. Another way is to use a Fourier-transform technique.
  • Image Filtering: This removes unwanted noise, using either a Gabor filter or a Butterworth filter. The basic approach is to convolve the image with the filter.
  • Image Binarisation: The filtered image is then converted to a binary image using thresholding, to improve the contrast. It is based on global thresholding: a pixel value greater than the threshold is set to 1, and a pixel value less than it is set to 0.
  • Image thinning: This erodes foreground pixels until the ridges are one pixel wide, while preserving their connectivity.
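As a concrete illustration, the segmentation, normalization, and binarization steps above can be sketched in Python with NumPy. This is a minimal sketch: the block size, target mean/variance, and variance threshold are illustrative assumptions, not standard values, and the orientation, frequency, and filtering stages are omitted for brevity.

```python
import numpy as np

def preprocess(gray, block=16, target_mean=128.0, target_var=1000.0,
               var_thresh=100.0):
    """Sketch of segmentation, normalization, and binarization.

    `gray` is a 2-D uint8 fingerprint image. All constants are
    illustrative assumptions, not standard values.
    """
    img = gray.astype(np.float64)
    h, w = img.shape

    # Segmentation: keep only blocks whose local variance exceeds a
    # threshold; low-variance blocks are treated as background.
    mask = np.zeros((h, w), dtype=bool)
    for r in range(0, h, block):
        for c in range(0, w, block):
            blk = img[r:r + block, c:c + block]
            if blk.var() > var_thresh:
                mask[r:r + block, c:c + block] = True

    # Normalization: pull every pixel toward a target mean and
    # variance so gray values fall in a uniform range.
    m, v = img.mean(), img.var()
    dev = np.sqrt(target_var * (img - m) ** 2 / max(v, 1e-9))
    norm = np.where(img > m, target_mean + dev, target_mean - dev)

    # Binarization: global threshold -- pixels above the threshold
    # become 1, the rest 0 -- restricted to the segmented region.
    return ((norm > target_mean) & mask).astype(np.uint8)
```

A real pipeline would insert the orientation, frequency, and Gabor-filtering stages between normalization and binarization before handing the binary image to the thinning step.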
Analyzing the images: This involves extracting the minutiae from the processed image and then comparing them with the image patterns already stored in the database. Minutiae extraction is done by calculating the crossing number: half the sum of the absolute differences between successive pairs of pixels in an eight-connected neighborhood (that is, the eight pixels surrounding a given pixel). The crossing number identifies the type of each fingerprint characteristic.
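A minimal sketch of this crossing-number computation, assuming a binary, one-pixel-wide skeleton such as the thinning step produces (the function name is my own):

```python
import numpy as np

# Offsets of the 8 neighbors, walked clockwise, with the first
# repeated at the end to close the circular walk.
_NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def crossing_number(skel, row, col):
    """Crossing number of a ridge pixel in a thinned binary image.

    CN = half the sum of absolute differences between successive
    neighbors in the 8-connected circle. CN == 1 marks a ridge
    ending, CN == 3 a bifurcation.
    """
    p = [int(skel[row + dr, col + dc]) for dr, dc in _NEIGHBORS]
    return sum(abs(p[i] - p[i + 1]) for i in range(8)) // 2

# A ridge that stops at the center pixel is an ending (CN == 1):
ending = np.array([[0, 0, 0],
                   [0, 1, 1],
                   [0, 0, 0]])
print(crossing_number(ending, 1, 1))  # -> 1

# Three ridges meeting at the center form a bifurcation (CN == 3):
fork = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [0, 1, 0]])
print(crossing_number(fork, 1, 1))  # -> 3
```

Scanning every skeleton pixel with this function and recording the positions where CN is 1 or 3 yields the minutiae list that the matcher compares against the database.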

The acquired image, along with the extracted details, is then compared with the existing records in the database, which can be tenprint or palm-print records. If the images or details match, the person is identified. The system provides a list of the closest-matching fingerprint images from the tenprint database, and the results are verified to determine whether an identification has been made.

Advantages of Fingerprint Identification:

  • It is highly accurate.
  • Fingerprints are unique and are never the same for two persons.
  • It is among the most economical biometric techniques.
  • It is easy to use.
  • It requires little storage space.

Applications of Fingerprint Identification:

  • To identify criminals at crime scenes. This was one of the major reasons the FBI developed the technology in the USA.
  • To identify members of an organization. It improves security so that only authenticated persons, and no one else, can enter a secured area.
  • In grocery stores, to automatically recognize and bill a registered user’s credit or debit card.
So, this is a brief overview of fingerprint identification. Any further inputs, such as details about the processing techniques or about electrical and electronic projects, are welcome for discussion…