Combining margining and calibration to reduce power consumption and improve power-supply noise rejection

This application note provides an overview of electronic margining and its value in detecting potential system failures before a product leaves the factory. Margining is a calibration method that effectively predicts, and allows adjustments to improve, product quality. It can also be used to sort products into performance grades, so that the highest-quality products can be sold at higher prices. We discuss the shortcomings of binning and propose alternative ways to separate products.

For companies that manufacture electronic products, quality control is an integral part of their systems. To win goodwill and ensure positive feedback from customers, different manufacturing and sorting techniques are used throughout the production process. These techniques not only increase profits, but also spare manufacturers costly factory returns and can therefore reduce material costs.

The term “margin” has many meanings. One common kind of margin can be seen on clay tablets more than 5,000 years old: the space surrounding the text on the page. One of the scariest uses of the term involves buying stocks on margin (borrowing money from a broker, with the purchased stock as collateral). Before the US stock market crash of 1929, people could borrow up to 90% of a stock’s value. If the stock price dropped, a “margin call” required them to pay in more money to retain ownership of the stock. Failing to meet a margin call meant the stock was sold, and investors could lose everything. Today, margin purchases are restricted to a much smaller percentage. We, however, will focus on margining in the computer and electronics industries. When the first microprocessors were mounted on motherboards to make computers, users, especially gamers and modders (modifiers), wanted more speed. Thus “speed margining” was born.

Speed margining (also called “pushing” or “overclocking”) means changing a computer’s hardware settings so that it runs faster than the manufacturer’s rating. This can usually be done at several points in the system by adjusting the speed of the CPU, the memory, the video card, or the motherboard bus.

Typically, chips are tested by the manufacturer to determine the speed at which they fail; they are then rated one speed step below that point. IC manufacturers try to make their products as fast as possible, because faster hardware sells for more money. Statistically, some ICs on a wafer can run faster than others. Each chip is tested to see how fast it runs, and the faster chips are marked with the higher speed grade. Because the tests are strict and conservative (the parts must be guaranteed to run at their rated speed), gamers reason that they can push a CPU somewhat beyond its rating while keeping the system stable.

Overclockers also found that some IC manufacturers deliberately underrate chips to meet market demand and to maintain a distinction between high-end and low-end products. Sometimes, when a manufacturer runs short of slower parts, it marks faster chips as slower ones to meet demand. Does overclocking always work? No. But overclockers keep trying because, statistically, they succeed often enough.

The next step for margining

As we discussed in the “Why Digital Is Analog” section of application note 4345, “Well Grounded, Digital Is Analog,” digital signals tolerate noise and power-level variations better than analog signals do. This is because of the inherent thresholds in digital devices. Analog equipment is impaired immediately as a signal degrades, whereas digital equipment typically keeps operating as long as the signal stays above or below its critical threshold level. In a digital system the failure is sudden (the “cliff effect”), because the degradation of the signal is initially masked by the threshold until it becomes severe enough to corrupt the data. It is therefore necessary to test performance margins to ensure that a product will work properly through its warranty period and under extreme conditions.

Margining is a technique that has long been used in the computer industry. It demonstrates the reliability of a digital process by testing it under conditions harsher than normal service. The amount of additional stress that can be applied before failure is a measure of the performance margin.

Margining can be applied successfully and, used correctly, provides much-needed confidence. There are generally two ways to margin a circuit. The first uses pathological data patterns: depending on the system and the encoding used, data patterns with long strings of ones or zeros starve the clock-recovery circuitry and stress the threshold requirements. The second, more general method is to vary the power-supply level (usually by lowering the voltage).
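
As a concrete illustration of the first method, here is a minimal sketch, assuming a simple serial bit stream, that builds a pathological pattern of long identical-bit runs and reports the longest run, which is exactly the property that deprives a receiver of clock information. The function names are invented for this example.

    # Minimal sketch: generate a pathological data pattern for margining a
    # serial link. Long runs of identical bits give the receiver no
    # transitions to recover a clock from, stressing its threshold margins.
    def pathological_pattern(run_length: int, repeats: int) -> list:
        """Alternating blocks of `run_length` zeros and ones."""
        pattern = []
        for i in range(repeats):
            pattern.extend([i % 2] * run_length)   # even blocks 0, odd blocks 1
        return pattern

    def longest_run(bits: list) -> int:
        """Length of the longest run of identical bits (the stress metric)."""
        best = current = 1
        for prev, cur in zip(bits, bits[1:]):
            current = current + 1 if cur == prev else 1
            best = max(best, current)
        return best

    bits = pathological_pattern(run_length=72, repeats=8)
    print(len(bits), "bits, longest run:", longest_run(bits))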

Margining for calibration

During the design phase of a system or PCB, the designer must define a test protocol. The protocol details how the stress applied to the device under test (DUT) is quantified, and how DUTs are objectively sorted into performance grades or rejected (Table 1). The protocol should weed out early infant-mortality failures and help ensure that the DUT will function over its expected lifetime. A typical protocol gradually reduces the power-supply voltage, possibly varies other parameters (such as clock speed), and monitors a few key functional parameters. Limits can then be set to bin DUTs from high-performance to low-performance grades and to remove failing devices, as in the sketch below.
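
A minimal sketch of such a protocol might look as follows; set_supply_voltage() and run_functional_test() are hypothetical hooks into the real bench supply and test suite, and the voltage steps and bin limits are invented for illustration.

    # Sketch of a margining protocol: step the supply down, run a functional
    # test at each step, and bin the DUT by the lowest passing voltage.
    NOMINAL_V = 3.30
    STEP_V = 0.05
    FLOOR_V = 2.70
    BINS = [(2.80, "grade A"), (3.00, "grade B"), (3.15, "grade C")]

    def margin_dut(set_supply_voltage, run_functional_test) -> str:
        lowest_pass = None
        v = NOMINAL_V
        while v >= FLOOR_V:
            set_supply_voltage(v)
            if not run_functional_test():
                break                      # cliff effect: stop at first failure
            lowest_pass = v
            v = round(v - STEP_V, 2)
        if lowest_pass is None:
            return "reject"                # fails even at nominal voltage
        for limit, grade in BINS:
            if lowest_pass <= limit:       # more margin below nominal, better grade
                return grade
        return "reject"                    # passed only near nominal: no margin

A real protocol would also cycle temperature and clock speed and log every measurement, but the pass/fail sweep and bin limits above capture the essential structure.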

Other uses of margining and calibration

By combining margining and calibration, other system specifications can be met as well. Parameters that can be improved include power consumption and power-supply noise rejection (see Figure 1).

Combining margining and calibration to reduce power consumption and improve power-supply noise rejection

Engineering always seems to involve trading one advantage against another. Linear (low-dropout) regulators are quiet, but the excess power is converted into heat. Switching regulators are more power-efficient, but noisier. Figure 1 uses calibration to get the best of both worlds. The tolerance of a power regulator is typically in the 5% to 10% range. Now envision a margined system that must minimize power consumption. The outputs of the two regulators pass through separate low-pass filters or decoupling networks to minimize noise. Most supply regulators are optimized for DC, with just enough frequency response to follow supply and load changes; they act as a feedback loop, comparing the output voltage with a reference voltage.
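
To see why trimming matters, note that the power burned in the LDO pass element is simply headroom times load current. The short calculation below, using invented example numbers, compares an uncalibrated switcher, whose nominal output must be set high enough that its -10% tolerance corner still clears the LDO’s dropout, against one calibrated to sit just above dropout.

    # Illustrative arithmetic only (example values): LDO dissipation equals
    # headroom times load current, so trimming the switcher to sit just above
    # the LDO's minimum input voltage cuts the wasted power.
    LOAD_A = 0.200          # load current, amps
    LDO_OUT_V = 3.30        # regulated LDO output
    DROPOUT_V = 0.20        # LDO dropout voltage
    MARGIN_V = 0.10         # calibration margin above dropout

    def ldo_loss(switcher_out_v: float) -> float:
        """Power dissipated in the LDO pass element, in watts."""
        return (switcher_out_v - LDO_OUT_V) * LOAD_A

    # Uncalibrated: nominal must be high enough that the -10% corner still
    # clears LDO output + dropout.
    nominal_uncal = (LDO_OUT_V + DROPOUT_V) / 0.90
    print(f"uncalibrated: {ldo_loss(nominal_uncal):.3f} W nominal, "
          f"{ldo_loss(nominal_uncal * 1.10):.3f} W at +10%")
    # Calibrated: trimmed to dropout plus a small margin.
    print(f"calibrated:   {ldo_loss(LDO_OUT_V + DROPOUT_V + MARGIN_V):.3f} W")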

The MAX11600 series has an input multiplexer followed by an analog-to-digital converter (ADC) and can monitor up to 12 independent points. To minimize power consumption, we measure point A while setting the digital potentiometer (pot) on the switching regulator, which accounts for the voltage drop across the switcher’s decoupling network. Point B is measured while adjusting the digital pot on the LDO to compensate for the drop across the LDO’s decoupling network. By setting the switcher’s output just above the voltage the LDO requires, we obtain relatively quiet power with relatively low power loss. To reduce noise further, the LDO can be replaced with a voltage reference, or a reference can be added after the LDO output to power critical circuits such as a low-noise amplifier.
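
Such a calibration loop might be coded as in the sketch below. read_adc() and set_digipot() are hypothetical stand-ins for real MAX11600-series ADC and digipot drivers, and the channel numbers, targets, and step logic are assumptions made for illustration rather than the note’s own firmware.

    # Sketch of the calibration loop just described. read_adc() and
    # set_digipot() are hypothetical stand-ins for MAX11600-series ADC and
    # digipot drivers; channels, targets, and tolerances are example values.
    POINT_A = 0             # ADC channel at the switcher output (point A)
    POINT_B = 1             # ADC channel at the LDO output (point B)
    LDO_TARGET_V = 3.30
    DROPOUT_V = 0.20
    HEADROOM_V = 0.10       # keep the switcher just above the LDO's dropout
    TOLERANCE_V = 0.02

    def trim_to(read_adc, set_digipot, pot, channel, target_v, max_steps=64):
        """Nudge one digipot wiper until the monitored point hits target_v."""
        for _ in range(max_steps):
            error = read_adc(channel) - target_v
            if abs(error) <= TOLERANCE_V:
                return
            set_digipot(pot, down=(error > 0))   # step wiper toward the target
        raise RuntimeError(f"{pot} did not converge on {target_v} V")

    def calibrate(read_adc, set_digipot):
        # Trim the LDO first so point B sits at the target, compensating for
        # the drop across the LDO decoupling network.
        trim_to(read_adc, set_digipot, "ldo", POINT_B, LDO_TARGET_V)
        # Then walk the switcher until point A sits just above the LDO's
        # minimum input: quiet power with minimal loss in the pass element.
        trim_to(read_adc, set_digipot, "switcher", POINT_A,
                LDO_TARGET_V + DROPOUT_V + HEADROOM_V)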
