Interviews and Articles
Transcription
Sensor module design improves automotive electrical integration, functionality (Part 1)
By Torsten Herz (ZMDI)
June 2011 in EETimes (published 6/24/2011, 01:58 AM EDT)

Thanks to state-of-the-art sensor-based control systems that provide precise real-time monitoring, automotive engines operate more efficiently and with lower environmental impact. One result of this improved performance is that the number of sensor applications in vehicles has seen double-digit growth over the past several years. Another result is a growing trend to add more sensor modules to vehicles. Such modules must be reliable and robust and operate with long-term stability and high precision under harsh physical, chemical, and electrical stress conditions. Additionally, a set of built-in diagnostic functions is required for automotive sensor modules to support the "maintenance-on-demand" policy of automotive OEMs, as well as the special failure-mode operations required for safety-critical sensor applications such as brake pressure sensing.

The chemical (i.e., media/humidity/corrosion resistance) and physical (shock, vibration) robustness of sensor modules is determined mainly by the materials used and by the assembly and connection technologies. The electrical robustness (i.e., EMC) is determined by the application circuit, the chosen electrical components (ICs, discrete parts), and the layout of the electrical connections according to the application circuit. This series describes the latter aspect of the design of an automotive sensor module. The module incorporates a sensor signal conditioner (ZSC31150) to enable the design of highly accurate sensor modules operating at temperatures of -40 to +150 °C, providing EMC performance and a set of on-chip protection and diagnostic features that address safety-critical applications at SIL2 level. Through careful electrical design of the sensor module that considers all EMC-related parameters (i.e., parasitic capacitances and inductances), high electrical robustness and built-in diagnostic functionality can be achieved at optimized module cost, together with very high accuracy of the measured signal.

Because the mechanical design and the interconnection between a sensor system and the processing unit have a major influence on their electromagnetic behavior, it is essential to distinguish between "embedded sensing functions" and "stand-alone sensor modules". In the case of embedded sensing functions (ESF), the sensor electronics are placed close to the processing unit—in automotive applications this is an ECU (Electronic Control Unit). The connections between ESF and ECU are typically very short (<< 30 cm) and normally realized as traces on a PCB. Modern ESFs provide a digital interface (i.e., SPI), which is connected to the microcontroller of the ECU. Because of this close placement on the same PCB, there are several options for fulfilling the tough automotive requirements in terms of EMC (i.e., shielding or the use of external protection parts). One example of an ESF is barometric pressure sensing.

For stand-alone sensor modules (SASEM), the situation is completely different. These are typically connected to an ECU via an unshielded harness of up to 2.5 meters in length.
The available board space inside the module's case (made of metal or plastic) is very limited, and there is a trend toward further miniaturization because lower material consumption equals lower weight, which in turn equals lower cost. Depending on the mode of power supply (battery-powered or ECU-powered), there are various output interfaces:

Battery-powered SASEM:
• PWM output (high-side load)
• PWM output (low-side load)
• CAN-bus interface
• LIN-bus interface
• Absolute analog voltage output

ECU-powered SASEM:
• Ratiometric analog voltage output
• SENT interface (fast digital unidirectional point-to-point data transfer)
• PSI5 interface (digital 2-wire current-coded data transfer)

Figure: Typical construction of an automotive pressure sensor module

For passenger cars it is still very common to use ECU-powered SASEMs that provide a ratiometric analog voltage output. The typical supply voltage is 5 VDC ±10%, and the current consumption of a SASEM should be ≤10 mA. The operating conditions are quite harsh, as mentioned at the beginning, which rules out some effective passive protection parts such as ferrite beads, which operate at temperatures only up to +125 °C. Depending on the module's design (i.e., the material of the module's case), two additional 10 nF (maximum) capacitors (shown in green in the figure below) at the differential inputs VINP and VINN to VSSA might be required in order to fulfill the EMC specification of the SASEM—this leads us to typical automotive EMC requirements.

Figure: ZSC31150 automotive application circuit

Basically, the electromagnetic characteristic of systems like SASEMs is split into two areas—electromagnetic emissions (conducted or radiated) and electromagnetic immunity (conducted or radiated). The limitation of electromagnetic emissions ensures that other electrical systems are not disturbed by the operation of a SASEM. Thus, the active electronics inside a SASEM determine its "emission performance." With proper IC design and digital on-chip clock frequencies below 5 MHz (the DSP on the ZSC31150, for example, typically operates at 3 MHz), common ISO and OEM standards for electromagnetic emissions of SASEMs can be fulfilled.

Electromagnetic immunity

In terms of electromagnetic immunity against continuous or transient RF energy, there are several standardized test methods for both conducted and radiated modes of RF-energy transfer to the SASEM. Because of the small dimensions of the SASEM itself and of its internal conductive parts, there is no effective RF antenna for radiated RF energy up to 1 GHz—all dimensions are smaller than the length la of an equivalent λ/4 dipole. On average, this length is approximately 50 mm at 1 GHz as calculated with equation (1). Up to 1 GHz, the primary effective antenna for RF energy is the SASEM's harness. However, there is a trend to expand the EMC test procedure frequency range to 3 GHz (or more). In this case, the effective length la of an equivalent λ/4 dipole decreases to approximately 20 mm as calculated by equation (1), and conductive structures on the module's PCB with a length >15 mm can be an effective antenna for radiated RF energy. To prevent susceptibility at field strengths up to 600 V/m, shielding of sensitive signal paths might be required. Susceptibility is measured by EMC test procedures. There are different test configurations for radiated and conducted immunity (i.e., stripline, anechoic chamber, bulk current injection, etc.).
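Equation (1) itself is not reproduced in this transcription. As a rough plausibility check of the λ/4 antenna lengths quoted above, the following sketch assumes a quarter-wavelength dipole with a practical shortening factor of about 0.65 to 0.7; this assumed form is not taken from the article.

# Hypothetical check of the quarter-wave antenna lengths quoted in the text.
# Assumption: la = k * c / (4 * f), with a shortening factor k ~ 0.65-0.7
# (the exact form of the article's equation (1) is not reproduced here).

C = 3.0e8  # speed of light in m/s

def quarter_wave_length_mm(f_hz, k=0.68):
    """Approximate effective lambda/4 dipole length in millimetres."""
    return k * C / (4.0 * f_hz) * 1000.0

print(round(quarter_wave_length_mm(1e9)))   # ~51 mm at 1 GHz (text: ~50 mm)
print(round(quarter_wave_length_mm(3e9)))   # ~17 mm at 3 GHz (text: ~20 mm)

With these assumptions, the figures of roughly 50 mm at 1 GHz and 20 mm at 3 GHz quoted in the article are reproduced to within a few millimetres.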
One of the toughest tests for common automotive SASEMs regarding immunity against continuously applied RF energy is the Bulk Current Injection (BCI) test, which belongs to the radiated-immunity EMC test group. Typically the frequency range tested is 1 to 400 MHz. The test simulates worst-case conditions for RF cross-coupling in a harness between the wires of the different electrical subsystems assembled inside a car. Because of the small distance between the RF source (emitting harness or wire) and the RF sink (harness of the sensor module), the induced energy can be very high; it is measured in mA or dBµA during the BCI test. To ensure that the induced energy can influence only the sensor module during the test, the ECU is replaced by a standardized artificial network and typical circuitry at VSIGNAL, which represents the input impedance of the original ECU used in the car, as shown below.

Figure: BCI test circuitry and equivalent RF circuitry of the SASEM for the circuit in the application circuit figure above and Case 1 in the table to follow

It is important to note that customizing this circuitry for each EMC test before designing the module is strongly recommended, because different EMC test circuitries can make different module designs necessary. Typically, "universal" solutions are too expensive. To fulfill the harsh automotive EMC requirements, all relevant electrical parasitics, especially capacitances between the electric sensor circuitry and other conductive parts of the SASEM, need to be considered, as shown in the figure above. A number of different configurations are possible for the module's construction and its assembly inside the car, as listed in Table 1 below. The case and the pressure supply adaptor (PSA) can each be plastic or metal, and each can have galvanic contact with the chassis or no contact.

Table 1: Possible configurations of module construction and automotive assembly

In Table 1, configurations 1 and 10 represent the extremes regarding the equivalent RF circuitry in the BCI test. With configuration 1, all parasitic impedances are at their maximum; with configuration 10, they are minimal or short-circuited. The first consideration is the electromagnetic coupling between the BCI antenna and the harness. If the frequency of the RF current IRF is in the range of the initial resonance frequency of the segment of harness between the RF-emitting BCI antenna and the DUT, then the induced current IRF_sink is at its maximum. The induced current value is determined by the parasitic impedances, especially by ZC_GND. As IRF_sink increases, its influence on the DUT becomes stronger. The worst case is configuration 10, because ZC_GND = 0 Ω (galvanic contact between the case and the car's chassis) and ZPSA_C = 0 Ω (galvanic contact between PSA and case). In this case IRF_sink is limited by the impedance of the parasitic capacitances of the DUT's signal paths V+, VOUT, and V- relative to the case and of the sensor bridge relative to the PSA. But there are additional parasitic capacitances (i.e., the internal signal paths relative to the case), which could also decrease the RF susceptibility of the DUT.

An example:
• Tolerance allowed for the analog output voltage of the DUT = ±40 mV (nominal value)
• Effective gain "G" of the SSC IC: G = 400
• DC bridge resistance = 4 kΩ / resulting AC bridge impedance at its differential terminals = 2 kΩ

Thus the limit of the differential bridge voltage's change caused by RF energy: (±40 mV / G) = ±0.1 mV.
And the resulting limit of the difference between the bridge's partial currents: ±0.1 mV / 2 kΩ = ±50 nA! This very simplified example illustrates the influence of the mechanical construction and the selected materials on the EMC behavior of the sensor module. It is even more challenging to define the parasitics under the conditions of high-volume automotive production while keeping the system's cost in view.

Torsten Herz is FAE manager at ZMDI.

Ratiometricity and Digital Signal Correction for High-Resolution, Low-Noise, Precision Smart Sensors
By Marko Mailand (ZMDI)
February 2012 in Semiconductor Network (Application Review)

Special analog and digital sensor signal processing concepts can be used to support interference-immune, high-precision sensor signal measurements. By making appropriate use of the concepts presented here, such as ratiometricity and signal conditioning, energy-efficient, high-performance standard solutions can be developed quickly.

By Marko Mailand, Medical, Consumer and Industrial Business Unit, ZMDI

Today's customers for sensors and sensor systems expect improved performance parameters such as module size, operating complexity, price and energy consumption, along with a reduction in overall cost. When determining environmental quantities such as pressure, temperature, weight, flow, torque, vibration, tension and strain, the generally ever-increasing demands on the final measurement task mean that the requirements in both consumer and industrial applications keep growing as well. This in turn leads to higher demands on sensor sensitivity, resolution, interference immunity and precision. In this context, the "smart sensor" system concept with a direct bus connection has gained increasingly broad acceptance over recent years. Such a system approach generally consists of the following functional elements: sensor, analog signal conditioning (amplification, offset correction, etc.), analog-to-digital conversion, digital signal correction, bus interface and digital evaluation.

Although the smart sensor is now regarded as the de facto standard for new products coming onto the market, particularly when used for high-precision sensor applications, there is still a very wide range of performance levels when it comes to the actual signal conditioning and processing. For example, companies commonly advertise interface or signal conditioning ICs offering 16-bit signal resolution even when the final measurement result exhibits noise amounting to several tens of percent of the full signal range. In such cases the user obtains the required performance only nominally, because the poor signal quality of the final measurement means that the effective resolution actually delivered may be only about 10 to 12 bits of the original range. For this reason, the elimination, compensation or at least minimization of circuit-specific analog interference, at the level of system concepts as well as circuits, is still necessary and becomes a key task again and again as designs move to smaller technologies. The good news is that, regardless of the underlying technology, valid and very effective circuit topologies and approaches exist for realizing high-resolution, energy-efficient, low-noise smart sensors.

Ratiometricity

The ratiometric measurement principle is a commonly used concept for eliminating interference originating from the power supply. In a ratiometric measurement, the required quantity is the ratio of two quantities that are each typically subject to interference. What matters is that the interference does not affect the actual measurement; a ratiometric value is, for example, independent of the supply voltage. Figure 1 shows that the ratio of the resistors R1 and R2, obtained from the measured voltages V1 and V2, is independent of the absolute value of the supply voltage VDD. Consequently, if the value of R1 is known, R2 can be determined by measuring the voltage ratio and applying the formula R2 = R1 · V2/V1.

Figure 1. Basic example of a ratiometric measurement circuit

In a system-integration approach, this principle can be extended and used in complex sensor interface and sensor signal conditioning (SSC) ICs (for example, ZMDI's ZSI21013 and ZSSC30xx, MAXIM's MAX1452, or ATMEL's AT77C104Bx). A ratiometric topology supports virtually noise-free applications that are inherently immune to supply-voltage interference while providing an effective signal resolution of 16 bits. The basic ratiometric principle can be applied to the SSC's amplifier and ADC (analog-to-digital converter). In this case, the internal IC reference voltage Vref, or Vrp and Vrn, can be derived directly from the supply voltage VDD of the resistive bridge sensor element (Figure 2).

Figure 2. Ratiometric topology for measuring a resistive bridge sensor signal
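The following minimal numerical sketch (hypothetical gain and sensitivity values, not taken from the article) illustrates the ratiometric idea: when the ADC reference is derived from the same supply VDD that feeds the bridge, the digital output code does not change with VDD.

# Minimal sketch of ratiometric conversion (hypothetical values).
# Assumptions: the bridge output VIN is proportional to its supply VDD,
# and the ADC reference Vrp - Vrn is derived from that same VDD.

RESOLUTION = 16
GAIN = 10.0          # assumed amplifier gain
SENSITIVITY = 2e-3   # assumed bridge sensitivity: VIN = SENSITIVITY * VDD

def adc_code(vdd):
    vin = SENSITIVITY * vdd          # bridge output scales with its supply
    vref = vdd                       # ratiometric reference: Vrp - Vrn = VDD
    return round((2 ** RESOLUTION) * GAIN * vin / vref)

# The code is identical although the supply varies by +/-10 %:
print(adc_code(1.8), adc_code(1.62), adc_code(1.98))   # -> 1311 1311 1311

With a fixed (non-ratiometric) reference instead of vref = vdd, the same ±10 % supply variation would appear directly as a ±10 % error in the output code.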
As a result, interference on VDD does not systematically affect the ratio of the sensor voltage VIN to the input voltage of the ADC. Consequently, even if the IC-internal absolute level of the supply voltage VDD varies, no spurious effect appears at the output Zout of the A2D converter. In principle, the following relationship applies in this case:

Zout = 2^resolution · ( GAMP · VIN/(Vrp − Vrn) + Voff/(Vrp − Vrn) ),

where GAMP denotes the amplification and Voff the internal offset within the signal path. Beyond that, extending the applicability of these concepts to future SSC generations by further suppressing interference on low-power supplies with an optimized voltage regulator is an ongoing research topic in academia and industry. With an LDO (low-dropout regulator), high-resolution, low-power sensor systems can thus be used in environments with considerable interference, such as smartphones. In this context the voltage regulator reduces the dynamic losses caused by parasitic capacitances in the signal path, yielding systems that deliver 16-bit to 24-bit effective resolution, operate at supply voltages down to the minimum transistor supply voltage of the respective silicon process, and at the same time exploit a ratiometric signal path.

Signal correction and auto-zero (AZ) correction

Besides the analog performance parameters, the ability of a standard SSC to correct the signal digitally is also very important. Sensor systems generally exhibit an inherent nonlinearity, which arises from the actual variable being measured (barometric pressure, hydraulic pressure, torsional vibration, etc.) as well as from the characteristics of the sensor element itself. In addition, a nonlinear relationship between the sensor signal and the ambient or sensor-system temperature frequently exists, and not only for resistive sensors.

To linearize the resulting measurement values and thereby support the subsequent evaluation in an optimal way, modern SSCs incorporate a dedicated digital processing unit that applies a number of signal correction coefficients. The required calibration points differ for each sensor/IC pair and must be acquired individually, which is usually done during assembly of the sensor system. In addition, such enhanced SSCs provide an integrated temperature sensor, minimizing the bill of materials (BOM) while retaining all the benefits of combined bridge-sensor and temperature signal correction.

Because the internal circuit signal offset Voff can be determined using an "auto-zero (AZ) measurement", the sensor signal can ultimately be corrected to the actually required value. To do this, the signal path is short-circuited directly at the IC input (Figure 3). Besides signal correction, the AZ measurement also supports inherent application diagnostics for monitoring parameters such as system stability and drift behavior. Thanks to these methods, both nonlinear and temperature-sensitive variables and the sensor signals themselves can be ideally prepared for the actual downstream information processing (Figure 3).

Figure 3. Calibration of the sensor system: compensation of error effects and linearization

Standard features

The characteristics mentioned above, as well as current and future sensor interface and SSC circuits, comply with industry standards and provide flexible digital interfaces such as OWI (One-Wire Interface), I2C and SPI. A charge-balancing (CB) architecture with programmable resolution and segmentation is generally used as the basic IP for the ADC in low-power applications with lower sampling rates, while sigma-delta approaches are adopted in smart sensor systems where power is less critical and sampling rates above 1 ksps (samples per second) are required. With a segmented CB-ADC, a choice can be made between full MSB (most significant bit) conversion and combined MSB/LSB (least significant bit) conversion. In both cases, a suitable trade-off between conversion speed and additional noise reduction in the final measurement value can be selected for the application domain at hand. Using finely programmable analog pre-amplification and adjustable ADC input offset shifting, such ICs can be optimized for the wide variety of signal curves determined by the environmental signal and the sensor element characteristics (in particular offset, sensitivity and measurement range). Above all, their availability as standard yet application-specific ICs has triggered a competition for position in certain markets: continuous technical improvement (features and parameters), miniaturization and cost reduction of the individual SSC circuits. As a result, a broad range of commonly available sensor signal conditioning ICs is on offer for the development of new, future-oriented smart sensors and individual applications.

Energy efficiency is a must

Operation at supply voltages down to 1.8 V with a current consumption of at most 1 mA (including the A2D converter) is today's standard requirement and the state of the art for existing and future SSCs. One approach to making sensor applications even more energy efficient is for the SSC to provide several operating modes. Three main modes are commonly used in this context (compare Figure 4):

• Continuous/update mode: All IC-internal blocks are permanently powered, so the IC responds to a measurement request with minimum latency. Current is consumed even during "inactive" periods in which no A2D conversion is performed. In update mode, periodic measurements are carried out without an additional measurement request command, and the respective results can be polled accordingly.

• Sleep/wake-up mode: Essentially only the interface remains active, listening to the digital interface bus. Only when a valid command is received are the individually required IC blocks powered up and the requested command, for example a sensor measurement, executed. Consequently, only a standby current is consumed as long as no IC activity is required. On the other hand, the response time to a command request is somewhat longer than in continuous or update mode.

• Command/test mode: All IC-internal blocks are powered but can be switched off by command. This requires broader knowledge of the IC system architecture. Operating modes of this kind are generally used for test purposes or enable application-specific support by the IC manufacturer for a customer's individual SSI and SSC devices.

Figure 4. Typical SSC operating modes

In ZMDI's ZSSC3016, for example, it is the sleep mode in particular that minimizes the average power consumption. In sleep mode the circuit is in a virtual power-down state (current consumption possibly below 1 µA), from which it can wake up within less than a second upon receiving a bus command or the appropriate circuit ID. Once awake, a complete sensor measurement is performed and the IC immediately returns to standby. Depending on the interface protocol, the resulting measurement value can also be evaluated in the standby (low-power) mode. Similar power-reduction capabilities are reportedly offered by ICs from MAXIM, ATMEL and others.

An additional approach that can be used to minimize the overall power consumption is voltage domain sectioning. Here the regulator, the reset block and the IC's interfaces are designed for a wider supply-voltage range than the analog sensor front end and the special digital computation units; the latter operate at the minimum (internally regulated) supply voltage. The SSC device generally also provides the supply voltage for externally connected sensor elements (again at the minimum internally regulated level), so the current consumption of the sensor element is likewise reduced by the SSC's low down-regulated supply voltage. As a result, state-of-the-art circuits available today achieve an average current consumption of less than 100 µA in a bench-test scenario of one measurement per second.

Finally, with the introduction of the latest SSC circuit products, a standard-IC market is now available that makes it possible to develop size-optimized, energy-efficient smart sensors with performance parameters that until recently could be obtained only with ASIC-based or discrete-chip solutions.
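A back-of-the-envelope sketch of the duty-cycled current figures discussed above; the numbers are assumptions in the spirit of the article, not datasheet values.

# Rough average-current estimate for a duty-cycled smart sensor
# (assumed numbers: ~1 mA while measuring, ~1 uA in sleep,
#  tens of milliseconds of active time per measurement).

I_ACTIVE = 1.0e-3      # A, active current during a measurement (assumed)
I_SLEEP = 1.0e-6       # A, sleep-mode current (assumed, "below 1 uA")
T_ACTIVE = 0.05        # s, active time per measurement (assumed)
RATE = 1.0             # measurements per second (bench-test scenario)

def average_current(i_active, i_sleep, t_active, rate):
    duty = min(t_active * rate, 1.0)   # fraction of time spent active
    return duty * i_active + (1.0 - duty) * i_sleep

print(average_current(I_ACTIVE, I_SLEEP, T_ACTIVE, RATE) * 1e6, "uA")  # ~51 uA

With these assumed numbers the average stays well below the 100 µA figure quoted above for a one-measurement-per-second scenario.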
Analog/Mixed-Signal ICs: Useful Signal Resolution of 16 Effective Bits
Energy Efficiency and Interference-Immune Sensor Signal Processing
By Marko Mailand (ZMDI)
March 2012 in elektronik industrie

Extending well-known analog and digital sensor signal processing concepts with targeted energy-saving solutions enables interference-immune, highly accurate sensor signal measurements at reduced power consumption. Implementing the concepts addressed here paves the way for energy-efficient, high-performance standard solutions in the field of smart/intelligent sensors.

Author: Dr. Marko Mailand

Today's market requirements for sensors and sensor systems call for increasing performance parameters at decreasing overall cost: module size, operating complexity, price and energy consumption. Determining environmental properties such as pressure, temperature, weight, flow, torque, vibration, tension, strain and so on leads to steadily growing demands on sensitivity and resolution, interference immunity and accuracy, in the consumer sector as well as in industry. In this context, the system concept of the intelligent (smart) sensor with a direct bus connection has become increasingly established in recent years. Intelligent sensors are in principle composed of the following functional elements: sensor, analog signal conditioning (for example amplification and offset correction), analog-to-digital conversion, digital signal correction and digital evaluation.

While the smart or intelligent sensor is regarded as the de facto standard concept for new products on the market, especially for high-precision sensor applications, there is still a very wide spread in performance when it comes to the actual signal conditioning and processing, and in particular the power consumption. When moving to smaller technologies, it is still, and again and again, a major task to eliminate, compensate or at least minimize all circuit-specific analog interference effects. At the same time, proven concepts and solutions have to be modified to meet the demand for energy efficiency, which frequently leads to conflicting design approaches. Nevertheless, there are circuit topologies and approaches that remain valid independently of technology, and in particular remain effective, for realizing high-resolution, energy-efficient, low-noise intelligent sensors.

A simple approach with a big effect

A widely used concept for eliminating interference on the power supply is the ratiometric measurement principle.
Ratiometric measurements are characterized by the fact that the measurement result is sought as the quotient of two quantities, each of which is typically overlaid by interference. What is decisive, however, is that the superimposed interference does not affect the actual measurement. A ratiometric quantity is, for example, independent of the supply voltage. Figure 1 shows, using a simple example, that the ratio of the voltages V1 and V2 measured across the resistors R1 and R2 is independent of the absolute value of the operating voltage VDD. Thus, if the value of R1 is known, measuring the voltage ratio yields the resistance ratio and hence R2, where R2 = R1 x V2 / V1.

Exactly this basic principle is used in standard sensor interface and sensor signal conditioning (SSC) circuits from ZMDI (for example the ZSSC3016 and ZSSC3017) to enable virtually noise-free applications that are immune to supply-voltage interference, with a useful signal resolution of 16 effective bits. As an extension of the basic ratiometric principle, the IC-internal reference voltages, for example for the amplifier and the analog-to-digital converter (ADC), are derived directly from the corresponding supply voltage VDDB of the resistive bridge sensor element (Figure 2). As a consequence, interference on VDDB does not affect the ratio of the sensor voltage VIN to the input voltage at the AD converter. This in turn means that remaining fluctuations on the supply voltage VDDB cause the IC-internal absolute levels to vary, but no fluctuations whatsoever appear in the conversion result.

For ZMDI's latest SSC generation, this concept has been extended. Using low-power supply-voltage rejection by means of a suitable voltage regulator, the ZSSC3016 makes it possible to operate low-power sensor systems in heavily disturbed application environments, for example in smartphones. The voltage regulator reduces dynamic losses at parasitic capacitances in the signal path and enables 16-bit-accurate systems at operating voltages down to 1.8 V while simultaneously exploiting a ratiometric signal path.

Energy efficiency through clever power supply design

Operation at low supply voltages down to 1.8 V combined with an IC current consumption of at most 1 mA are basic goals pursued in ZMDI's current SSC developments such as the ZSSC3016. To enable even more energy-efficient sensor applications, ZMDI SSCs offer various operating modes, of which the wake-up or sleep mode in particular minimizes the total energy consumption. In this mode the circuit is in a quasi-power-down state (current consumption of less than 250 nA) from which it can be woken within a fraction of a second by a bus command or a matching circuit ID; a complete sensor measurement is then performed, and the IC immediately returns to the idle state. Depending on the interface protocol, the measurement result can also be read out in the idle state.

With the system concept realized in Figure 2, a largely stable, very low operating voltage (VDDB = 1.7 V) is generated using a so-called low-dropout regulator (LDO). The entire analog/digital sensor measurement path is operated at this low voltage.
Since, not least because of the ratiometric approach, the bridge sensor element itself is also supplied from VDDB, the total current consumption of the intelligent sensor can be minimized in this way. In addition, in the ZSSC3016 for example, the LDO has been designed so that it can generate a stable, low supply voltage VDDB even under the extreme conditions found in mobile devices; supply-voltage interference suppression of up to 90 dB is available without the need for additional external components.

At a glance: the ratiometric measurement principle

Separating the operating-voltage domains for the interface and the signal processing enables a new level of energy efficiency for high-precision intelligent sensors. To eliminate interference on the power supply, the ratiometric measurement principle is used in ZMDI's sensor interface and sensor signal conditioning ICs (for example the ZSSC3016 and ZSSC3017) to enable virtually noise-free applications that are immune to supply-voltage interference, with a useful signal resolution of 16 effective bits.

Figure 1, top: Basic circuit for ratiometric measurement.
Figure 2, right: Separation of the interface from the ratiometric topology for energy-efficient resistive bridge sensor signal measurement (for example in ZMDI's ZSSC3016).
Figure 3: Typical operating modes of ZMDI sensor interface and sensor signal conditioning ICs.
(All images: ZMDI)

Analog correction is only half the story

Analog performance parameters are very important for the final quality of the sensor measurement value, but the digital signal correction capability is just as essential. Sensor systems typically exhibit an inherent nonlinearity that results both from the actual measured quantity (for example barometric pressure, hydrodynamic pressure and torsional vibration) and from the sensor characteristic itself. In addition, a nonlinear relationship between the sensor signal and the ambient or sensor-system temperature frequently exists, and not only for resistive sensors.

To linearize the resulting measurement curves and thereby make them optimally usable for the subsequent evaluation, the ZSSC3016, for example, contains a specially adapted digital processing unit that can take up to 7 different calibration coefficients with 18-bit accuracy into account. The corresponding calibration points are specific to each sensor/IC pair and must be determined separately, usually during commissioning of the sensor system. To support such correction methods, the ZMDI SSCs include additionally integrated temperature sensors which, as in the ZSSC3016, could form a class of their own with a noise-free resolution of better than 0.005 K/LSB over the range of -40 to +85 °C.

Furthermore, circuit-internal signal offsets Voff can be determined via a so-called auto-zero (AZ) measurement, and the actually desired sensor signal can then be corrected with them. For this purpose, the signal path is short-circuited directly at the IC input. In addition to signal correction, the AZ measurement enables inherent application diagnostics for monitoring, for example, system stability and drift behavior.
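The exact correction formula used in the ZSSC3016 is not given in the article. The sketch below shows a generic second-order correction with simple temperature compensation of the kind such calibration coefficients typically parameterize; the coefficient names, their number and their values are hypothetical, the article states only that up to 7 coefficients with 18-bit accuracy are supported.

# Generic sketch of digital sensor signal correction with calibration
# coefficients (hypothetical structure and values; not the ZSSC3016 formula).

def correct(raw, temp_raw, c):
    """Apply offset/gain/nonlinearity correction plus a simple
    temperature compensation of offset and gain."""
    gain = c["gain"] * (1.0 + c["gain_tc"] * temp_raw)
    offset = c["offset"] + c["offset_tc"] * temp_raw
    lin = raw + c["sot"] * raw * raw      # second-order nonlinearity term
    return gain * (lin - offset)

coeffs = {
    "offset": 0.012, "offset_tc": 1.5e-4,   # hypothetical calibration data,
    "gain": 1.83,    "gain_tc": -2.0e-4,    # normally obtained per sensor/IC
    "sot": 0.05,                            # pair during calibration
}

print(correct(0.25, 0.1, coeffs))

In the same spirit, the auto-zero offset described above would be obtained by taking one reading with the inputs shorted and subtracting it from the raw value before applying the correction.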
With these methods, nonlinear and temperature-dependent measured quantities and sensor signals can be optimally prepared for the actual information processing that follows the measurement.

Standard features

In addition to the characteristics described above, existing and future sensor interface and SSC circuits from ZMDI offer industry-standard-compliant and content-flexible digital interfaces such as I2C (up to 3.4 MHz) or SPI (up to 20 MHz). A charge-balancing architecture that is programmable in resolution and segmentation is used as the basic IP for the ADC. A choice can be made between pure MSB conversion (most significant bit) and combined MSB/LSB conversion (LSB, least significant bit), so that an application-specific optimum between conversion speed and further noise reduction of the measurement result can be set. Fully SSC-corrected conversion results with 16-bit resolution can be generated at a rate of up to 175 per second. By means of finely programmable analog pre-amplification and an adjustable ADC input offset shift, ICs such as the ZSSC3016 and others can be adapted to a wide variety of signal curves of the ambient signal as well as to the sensor element characteristics (in particular offset, sensitivity and measurement range), and thus to almost any measurement task. Ultimately, with its 16-bit circuits ZMDI offers the standard-IC market the possibility of realizing size-optimized, energy-efficient intelligent sensors with performance parameters that were previously known only from ASIC-based or single-chip solutions. (jj)

The author: Dr. Marko Mailand is project manager for mixed-signal IC development in the Medical, Consumer and Industrial business unit at ZMDI in Dresden.

IO-Link – Universal, Smart and Easy
By Daniel Heinig (ZMDI)
August 2012 in ENGINEERLIVE

The IO-Link interface provides an "intelligent" method for closing the "last meter" in the IO (input/output) field level of factory automation and reduces costs as well as staff-hours for engineering, installation and maintenance.

In process and factory automation, tremendous progress has been made in recent decades, as can be seen when comparing today's sensors and actuators with those from the early days of automation. The original idea was to use electromagnetic, hydraulic or pneumatic devices to automate repetitive processes. Then came freely programmable logic controllers (PLCs), further electronic advances and the evolution of intelligent interfaces, resulting in the development of a huge number of highly integrated and powerful sensors and actuators. Today, simple binary switches have evolved into intelligent, communicative sensors. In this context, "intelligent" describes sensor or actuator devices that have, on the one hand, the ability to recognize and report defined conditions and, on the other hand, the capability to be diagnosed during error conditions and configured in the field.

However, these bidirectionally communicating devices need simple interfaces to communicate with the PLCs. Moreover, communication for calibrating the sensor/actuator devices is needed in most cases. In the past, many device manufacturers developed their own proprietary communication solutions for calibration. This "last meter" gap in factory automation can be closed with a smart interface based on the IO-Link specification, which is defined by the IO-Link Consortium.
IO-Link provides a simple and easy-to-use interface for intelligent sensor or actuator devices, as well as for simpler analog and digital sensors and actuators. They are connected via a master on a field bus to a PLC or a parameter server. Here IO-Link serves not as a bus system but as a point-to-point connection, with the objective of ensuring downward compatibility and integration into all bus systems in factory and process automation. That means standardized M12, M8 and M5 connectors with three-wire cables up to 20 meters in length can be used. IO-Link uses the 24 V DC signal standardized in IEC 61131-2. IO-Link is an international standard, which means it is likely to supersede most proprietary solutions in the future. In addition to the benefits in the actual application area within a factory, a positive side effect is that there will be a uniform "sensor language" across geographically dispersed manufacturing locations.

IO-Link communication between master and device uses a signal that can be processed with a standard UART (today's standard for many microcontrollers). Because IO-Link is a point-to-point connection, communication via the IO-Link telegram is much simpler than bus communication. Communication conflicts, and the long cycle times needed to recover from conflicts, do not occur with IO-Link. IO-Link offers three communication rates: COM1, COM2 and COM3. The COM1 data rate is 4.8 kBaud. COM2 has a data rate of 38.4 kBaud, which is the most common speed, and the COM3 rate is 230.4 kBaud.

Benefits with IO-Link

With IO-Link, a world standard is already in place. It is system and field-bus independent and can be integrated into all types of sensors and actuators. The installation of IO-Link devices is cost-neutral: traditional (three-wire) cables, including typical connection methods, can be used. Using IO-Link, devices can be parameterized during operation. Central data from a parameter storage server enable immediate parameterization. Complex local programming can be a thing of the past, which is especially advantageous for very small devices with difficult access. With IO-Link, the downtimes for programming are significantly reduced (by up to 90%) and the quality of the production equipment is much higher.

IO-Link also offers a wide range of diagnostics for the sensor or actuator device itself. For example, pollution, abrasion, temperature, pressure and voltage levels can be monitored, and remote maintenance can be performed very easily. Previously, for common devices this was only possible with proprietary solutions, and it typically required significant additional cabling work. With IO-Link, downtimes caused by preventive maintenance or a sudden breakdown of the equipment can be reduced by 80%, and problems can be detected much faster.

Miniaturization with IO-Link

In recent years, a trend toward smaller yet more powerful sensors and actuators has been evident in process and factory automation. With IO-Link technology, it is easy to miniaturize products based on these new devices using universally standardized and "intelligent" methods. When common proprietary solutions are used, especially those with high requirements for field-bus integrity, designing sensors with bidirectional communication and other "intelligent" features typically requires significantly more printed circuit board space, and costs can be considerably higher. The first IO-Link devices were assembled primarily using discrete components.
Today, highly integrated microchips (cable driver ICs and microcontrollers) in very small packages of 3x5 mm or 4x4 mm, or in wafer-level chip-scale package solutions (WL-CSP; see Fig. 1) with dimensions as small as 2.5x2.5 mm, enable powerful and cost-saving integration of IO-Link into the smallest intelligent sensors and actuators. IC product families with the same pin count and size but different functionality can support effective and easy platform designs for IO-Link applications.

Fig. 1: IO-Link PHY IC as WL-CSP

The integration of IO-Link is relatively easy, as demonstrated by the example block schematic for an IO-Link sensor in Fig. 2. The IO-Link chip manufacturer and software provider very often support the integration as well.

Fig. 2: Example block diagram for an IO-Link sensor

The standardized IO-Link interface enables, for the first time, the production of intelligent, cost-saving and fieldbus-independent sensors and actuators at the lowest field level. It completes the "last meter" between the field bus and the sensors/actuators, enabling direct bidirectional communication between the control station and the sensor or actuator device.

High-Precision Smart Sensors Via Innovative Signal Conditioning ICs
By Dr. Marko Mailand (ZMDI)
November 2012 in Technology First

Waveform Driven Plasticity in BiFeO3 Memristive Devices: Model and Implementation
By Christian Mayr, Paul Staerke, Johannes Partzsch, Rene Schueffny; Love Cederstroem; Yao Shuai; Nan Du, Heidemarie Schmidt
2012 in Advances in Neural Information Processing Systems 25 (NIPS 2012)
Publisher: Neural Information Processing Systems Foundation, Inc. (2014)

Christian Mayr, Paul Staerke, Johannes Partzsch, Rene Schueffny
Institute of Circuits and Systems, TU Dresden, Dresden, Germany
{christian.mayr,johannes.partzsch,rene.schueffny}@tu-dresden.de

Love Cederstroem
Zentrum Mikroelektronik Dresden AG, Dresden, Germany
[email protected]

Yao Shuai
Inst. of Ion Beam Physics and Materials Res., Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden, Germany
[email protected]

Nan Du, Heidemarie Schmidt
Professur Materialsysteme der Nanoelektronik, TU Chemnitz, Chemnitz, Germany
[email protected], [email protected]

Abstract

Memristive devices have recently been proposed as efficient implementations of plastic synapses in neuromorphic systems. The plasticity in these memristive devices, i.e. their resistance change, is defined by the applied waveforms. This behavior resembles biological synapses, whose plasticity is also triggered by mechanisms that are determined by local waveforms. However, learning in memristive devices has so far been approached mostly on a pragmatic technological level. The focus seems to be on finding any waveform that achieves spike-timing-dependent plasticity (STDP), without regard to the biological veracity of said waveforms or to further important forms of plasticity. Bridging this gap, we make use of a plasticity model driven by neuron waveforms that explains a large number of experimental observations and adapt it to the characteristics of the recently introduced BiFeO3 memristive material. Based on this approach, we show STDP for the first time for this material, with learning window replication superior to previous memristor-based STDP implementations. We also demonstrate in measurements that it is possible to overlay short and long term plasticity at a memristive device in the form of the well-known triplet plasticity.
To the best of our knowledge, this is the first implementation of triplet plasticity on any physical memristive device.

1 Introduction

Neuromorphic systems try to replicate cognitive processing functions in integrated circuits. Their complexity/size is largely determined by the synapse implementation, as synapses are significantly more numerous than neurons [1]. With the recent push towards larger neuromorphic systems and higher integration density of these systems, this has resulted in novel approaches especially for the synapse realization. Proposed solutions on the one hand employ nanoscale devices in conjunction with conventional circuits [1] and on the other hand try to integrate as much synaptic functionality (short- and long-term plasticity, pulse shaping, etc.) in as small a number of devices as possible. In this context, memristive devices¹ as introduced by L. Chua [2] have recently been proposed as efficient implementations of plastic synapses in neuromorphic systems. Memristive devices offer the possibility of having the actual learning mechanism, synaptic weight storage and synaptic weight effect (i.e. amplification of the presynaptic current) all in one device, compared to the distributed mechanisms in conventional circuit implementations [3]. Moreover, a high-density passive array on top of a conventional semiconductor chip is possible [1]. The plasticity in these memristors, i.e. their resistance change, is defined by the applied waveforms [4], which are fed into the rows and columns of the memristive array by CMOS pre- and postsynaptic neurons [1]. This resembles biological synapses, whose plasticity is also triggered by mechanisms that are determined by local waveforms [5, 6]. However, learning in memristors has so far been approached mostly on a pragmatic technological level. The goal seems to be to find any waveform that achieves spike-timing-dependent plasticity (STDP) [4], without regard to the biological veracity of said waveforms or to further important forms of plasticity [7]. Bridging this gap, we make use of a plasticity rule introduced by Mayr and Partzsch [6] which is driven in a biologically realistic way by neuron waveforms and which explains a large number of experimental observations. We adapt it to a model of the recently introduced BiFeO3 memristive material [8]. Measurement results of the modified plasticity rule implemented on a sample device are given, exhibiting configurable STDP behaviour and pulse triplet [7] reproduction.

2 Materials and Methods

2.1 Local Correlation Plasticity (LCP)

The LCP rule as introduced by Mayr and Partzsch [6] combines two local waveforms, the synaptic conductance g(t) and the membrane potential u(t). Presynaptic activity is encoded in g(t), which determines the conductance change due to presynaptic spiking. Postsynaptic activity in turn is signaled to the synapse by u(t). The LCP rule combines both in a formulation for the change of the synaptic weight w that is similar to the well-known Bienenstock-Cooper-Munroe rule [9]:

dw/dt = B · g(t) · (u(t) − Θu)    (1)

In this equation, Θu denotes the voltage threshold between weight potentiation and depression, which is normally set to the resting potential.
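As a point of reference, a minimal numerical sketch of equation (1) is given below. It uses a forward-Euler update together with simple exponential conductance and membrane-potential traces of the kind defined in the next equations; all parameter values are placeholders rather than values from the paper, and the Dirac potentiation term of u(t) is omitted for brevity.

# Minimal forward-Euler sketch of the LCP rule, eq. (1): dw/dt = B*g(t)*(u(t)-Theta_u).
# Parameter values are placeholders; the waveform shapes follow eqs. (2)-(3).
import math

B, THETA_U = 1.0, 0.0               # learning rate and voltage threshold (resting potential)
G_HAT, TAU_PRE = 1.0e-9, 0.015      # presynaptic conductance amplitude and decay (s)
U_REFR, TAU_POST = -5.0e-3, 0.035   # refractory dip and membrane time constant (s)

def g(t, t_pre):   # presynaptic conductance trace
    return G_HAT * math.exp(-(t - t_pre) / TAU_PRE) if t >= t_pre else 0.0

def u(t, t_post):  # postsynaptic refractory trace (Dirac potentiation term omitted)
    return U_REFR * math.exp(-(t - t_post) / TAU_POST) if t >= t_post else 0.0

w, dt = 0.0, 1.0e-4
for step in range(int(0.1 / dt)):          # integrate 100 ms with a pre spike at 0 ms
    t = step * dt                          # and a post spike at 10 ms
    w += B * g(t, 0.0) * (u(t, 0.010) - THETA_U) * dt
print(w)   # net depression, since only the refractory part of u(t) is modelled here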
Please note that coincident pre- and postsynaptic activities are detected in this rule by multiplication: a weight change only occurs if the presynaptic conductance is elevated and the postsynaptic membrane potential is away from rest. The waveforms for g(t) and u(t) are determined by the employed neuron model. Mayr et al. [6] use a spike response model [10], with waveforms triggered at times of pre- and postsynaptic spikes:

g(t) = Ĝ · exp(−(t − t_n^pre)/τpre)   for t_n^pre ≤ t < t_(n+1)^pre    (2)

u(t) = Up,n · δ(t − t_n^post) + Urefr · exp(−(t − t_n^post)/τpost)   for t_n^post ≤ t < t_(n+1)^post    (3)

where t_n^pre and t_n^post denote the n-th pre- and postsynaptic spike, respectively. The presynaptic conductance waveform is an exponential with height Ĝ and decay time constant τpre. The postsynaptic potential at a spike is defined by a Dirac pulse with integral Up,n, followed by an exponential decay with height Urefr (< 0) and membrane time constant τpost.

Following [6], postsynaptic adaptation is realised in the value of Up,n. For this, Up,n is decreased from a nominal value Up if the postsynaptic pulse occurs shortly after another postsynaptic pulse:

Up,n = Up · (1 − exp(−(t_n^post − t_(n−1)^post)/τpost))    (4)

The time constant for the exponential decay in this equation is the same as the membrane time constant.

[Footnote 1] In 1971 Leon Chua postulated the existence of a device where the current or voltage is directly controlled by flux or charge, respectively; this was called a memristor. Using a general state-space description, Chua and Kang later extended the theory to cover the very broad class of memristive devices [2]. Even though the two terms are used interchangeably in other studies, since the devices used in this study do not fit the strict definition of a memristor, we will refer to them as memristive devices in the following.

Figure 1: Progression of the conductance g, the membrane potential u and the synapse weight w for a sample spike pattern.

Figure 1 shows the pre- and postsynaptic waveforms, as well as the synaptic weight, for a sample spike train. For these simple waveforms, two principal weight change mechanisms are present: if the presynaptic side is active at a postsynaptic spike, the weight is instantaneously increased by the large elevation of the membrane potential. In contrast, all presynaptic activity falling into the refractory period of the neuron (the exponential decay after a spike) integrates as a weight decrease. As shown in [6], this simple model can replicate a multitude of experimental evidence, on par with the most advanced (and complex) phenomenological plasticity models currently available. In addition, the LCP rule directly links synaptic plasticity to other pre- and postsynaptic adaptation processes through their influence on the local waveforms. This can be used to explain further experimental results [6]. In Sec. 3.1, we will adapt the above rule equations to the characteristics of our memristive device, which is introduced in the next section.

2.2 Memristive Device

Non-volatile passive analog memory has often been discussed for applications in neuromorphic systems because of the space limitations of analog circuitry. However, until recently only a few groups had access to sufficient materials and devices. Developments in the field of nano-material science, especially in the last decade, opened new possibilities for creating compact circuit elements with unique properties.
Most notably, after HP released information about their so-called memristor [11], much effort has been put into the analysis of thin-film semiconductor-metal-metal-oxide compounds. One of the commonly used materials in this class is BiFeO3 (BFO). The complete conduction mechanisms in BFO are not fully understood yet, with partly contradictory results reported in the literature, but it has been confirmed that different physical effects are overlaid and dominate in different states. In particular, the resistive switching effect seems promising for neuromorphic devices and will be discussed in more detail. It has been shown in [12, 8] that the effect can appear uni- or bipolar and is highly dependent on the processing with regard to the substrate, growth method, doping, etc. [13]. We use BFO grown by pulsed laser deposition on a Pt/Ti/SiO2/Si substrate with an Au top contact, see Fig. 2. Memristors were fabricated with circular top plates, which were contacted with needle probes, whereas the continuous bottom plate was contacted at one edge of the die. The BFO films have a thickness of some 100 nm. The created devices show a unipolar resistive switching with a rectifying behavior. For a positive bias the device goes into a low resistive state (LRS) and stays there until a negative bias is applied, which resets it back to a high resistive state (HRS). The state can be measured without influencing it by applying a low voltage of under 2 V.

Figure 2: Photograph of the fabricated memristive material that was used for the measurements.

Figure 3 shows a voltage-current diagram which indicates some of the characteristics of the device. The measurement consists of three parts: 1) A rising negative voltage is applied, which resets the device from an intermediate level to HRS. 2) A rising positive voltage lowers the resistance exponentially. 3) A falling positive voltage does not affect the resistance anymore, and the relation is nearly ohmic. Because of the rectifying characteristic, the current in LRS and HRS for negative voltages does not exhibit as large a dynamic range as for positive voltages.

Figure 3: Voltage-current diagram of the device as linear and log-scale plot.

2.3 Phenomenological Device Model

To apply the LCP model to the BFO device and enable circuit design, a simplified device model is required. We have based our model on the framework of Chua and Kang [2]; that is, using an output function (i.e., for the current Im) dependent on time, state and input (i.e., the voltage Vm). Recently, this framework has been widely used for the modeling of memristive devices [11, 14, 15]. In contrast to many memristive device models which are based on a sinh function for the output relationship (following Yang et al. [14]), we model the BFO device as two semiconductor junctions. The junctions can abstractly be described by a diode equation: Id = I0 · (exp(qV/kT) − 1) [16]. In an attempt to capture the basic characteristics, our device can be modeled employing two diode equations, letting a state variable x influence the output and roughly represent the conductance:

Im = h(x, Vm, t) = ( I01 · (exp(d1 · Vm(t)) − 1) − I02 · (exp(−d2 · Vm(t)) − 1) ) · x(t)    (5)

where Vm is the voltage over the device² and the diode-like equations guarantee a zero-crossing hysteresis.
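A direct transcription of equation (5) into code is given below; the parameter values are placeholders, not the fit values used by the authors.

# Output relation of the phenomenological BFO model, eq. (5):
# Im = ( I01*(exp(d1*Vm) - 1) - I02*(exp(-d2*Vm) - 1) ) * x
# Parameter values are placeholders, not the authors' fitted values.
import math

I01, I02 = 1.0e-7, 1.0e-8   # saturation currents of the two diode-like terms (A)
D1, D2 = 1.2, 0.8           # exponential slope parameters (1/V)

def output_current(x, vm):
    """Device current for state x (rough conductance) and applied voltage vm."""
    return (I01 * (math.exp(D1 * vm) - 1.0)
            - I02 * (math.exp(-D2 * vm) - 1.0)) * x

for vm in (-4.0, -1.0, 0.0, 1.0, 4.0):
    print(vm, output_current(0.5, vm))
# Im = 0 at Vm = 0 for any state x, i.e. the zero-crossing hysteresis noted above.

The asymmetric choice of I01/I02 and d1/d2 is meant to reflect the rectifying behavior described in Sec. 2.2.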
The use of the parameters I0i and di now allows individual control of the current characteristics for negative and positive voltages; as shown in the previous section, these are rather asymmetric for our BFO devices. For the purpose of modeling plasticity, our focus has been on the dynamic behavior of the conductance change; this was investigated in some detail by Querlioz et al. [15] and has served as the basis for our model of the state variable:

dx/dt = f(x, Vm, t) = Γ(x) · Ψ(Vm)    (6a)

In the above, the functions Γ(x) and Ψ(Vm) describe how the current state affects the state development and the effect of the applied voltage, respectively. Γ(x) is described by an exponential function:

Γ(x) = exp(−β1 · (x − Gmin)/(Gmax − Gmin))   for Vm(t) > 0,
Γ(x) = exp(−β2 · (Gmax − x)/(Gmax − Gmin))   for Vm(t) ≤ 0 and x > Gmin,
Γ(x) = 0   else    (6b)

In Ψ(Vm) we again favor separate exponential functions over a sinh function for increased controllability of the different voltage domains (positive and negative). Here the parameters φ1 and φ2 govern the voltage dependence of the state modification, with α1 and α2 scaling the result. With β1 and β2, the speed of state saturation is set:

Ψ(Vm) = α1 · (exp(φ1 · Vm) − 1)   for Vm(t) ≥ 0,
Ψ(Vm) = α2 · (1 − exp(−φ2 · Vm))   for Vm(t) < 0    (6c)

[Footnote 2] With sinh(z) = 1/2 · (e^z − e^−z), our approach is not fundamentally different from using a sinh function.

For the implementation we have used one of the most prominent commercially available simulators for custom analog and mixed-signal integrated circuit design, Cadence® Spectre®. Using behavioral current sources, the equations for h(x, Vm, t) and f(x, Vm, t) can be implemented and simulated with feasibility for circuit design. Depicted in Fig. 4 are the conductance change over time, at different voltages, for the model (Fig. 4a) and for measurements (Fig. 4b). It can be seen how the exponential dependency on the device voltage gives rise to different levels of operation (Equations (5) and (6c)). Also the saturation of the conductance change for a given voltage is visible (Equation (6b)). The sharp changes of current seen in the model are a result of our simplistic approach, whereas the real devices show slower transitions. In addition, it can be noted that above 5 V the real device appears to experience a significantly steeper rise in current. However, the target is to have reasonable characteristics in the region of operation below 5 V, which is the region relevant in our plasticity rule experiments.

Figure 4: Device current for different applied voltages for model (a) and measurement (b).

3 Results

3.1 Modified LCP

A nonlinearity or learning threshold is required in order to carry out the correlation operation between pre- and postsynaptic waveforms that characterizes various forms of long-term learning [9, 17]. In the original LCP rule, this is done by the multiplication of pre- and postsynaptic waveforms, i.e. only coincident activity results in learning. Memristive devices are usually operated in an additive manner, i.e. the pre- and postsynaptic waveforms are applied to the two terminals of the device, thus adding/subtracting their voltage curves. In order for the state of the memristive device to be affected only by an overlap of both waveforms, a positive and a negative modification threshold are required [4].
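For completeness, the state-update side of the model, equations (6a)-(6c), can be sketched in the same way, again with placeholder parameter values; the two modification thresholds discussed next arise from the exponential voltage dependence in Ψ(Vm).

# State update of the BFO model, eqs. (6a)-(6c): dx/dt = Gamma(x) * Psi(Vm).
# All parameter values are placeholders, not the authors' fitted values.
import math

G_MIN, G_MAX = 0.0, 1.0
BETA1, BETA2 = 3.0, 3.0        # speed of state saturation
ALPHA1, ALPHA2 = 1.0e-2, 1.0e-2
PHI1, PHI2 = 1.5, 2.0          # voltage dependence for positive / negative Vm

def gamma(x, vm):
    if vm > 0.0:
        return math.exp(-BETA1 * (x - G_MIN) / (G_MAX - G_MIN))
    if vm <= 0.0 and x > G_MIN:
        return math.exp(-BETA2 * (G_MAX - x) / (G_MAX - G_MIN))
    return 0.0

def psi(vm):
    if vm >= 0.0:
        return ALPHA1 * (math.exp(PHI1 * vm) - 1.0)
    return ALPHA2 * (1.0 - math.exp(-PHI2 * vm))

def step(x, vm, dt=1.0e-3):
    """One forward-Euler step of eq. (6a)."""
    return x + gamma(x, vm) * psi(vm) * dt

print(step(0.5, 4.0), step(0.5, -4.0))  # positive bias raises x, negative bias lowers it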
As can be seen from equation (6c), the internal voltage-driven state change Ψ(Vm) is governed by the two parameters φ1 and φ2, which set the thresholds for negative and positive voltages. For our devices, these work out to effective modification thresholds of -2 V and +2.3 V. Thus, we need waveforms where coincident activity causes a voltage rise above the positive threshold or, respectively, a voltage drop below the negative threshold. In addition, we need a dependence between voltage level and weight change, as the simplest method to differentiate between weights is the voltage saturation characteristic in Fig. 3. That is, a single stimulus (e.g. pulse pairing in STDP) should result in a distinctive memristive programming voltage, driving the memristive device into the corresponding voltage saturation level via the (for typical experiments) 60 stimulus repetitions.

Apart from quantitative adjustments to the original LCP rule, this requires one qualitative adjustment. The presynaptic conductance waveform is now taken as a voltage trace, and a short rectangular pulse is added immediately before the exponential downward trace, arriving at a waveform similar to the spike response model for the postsynaptic trace, see the uppermost curve in Fig. 5. We call this the modified LCP rule. For overlapping pre- and postsynaptic waveforms, the rectangular pulses of both waveforms 'ride up' on the exponential slopes of their counterparts when looking at the voltage difference Vm = Vpre − Vpost across the memristive device, for pre- and postsynaptic waveforms applied to the two terminals of the device (see the third curve from the top in Fig. 5). Since the rectangular pulses are short compared to the exponential waveforms, they represent a constant voltage whose amplitude depends on the time difference between both waveforms (as expressed by the exponential slopes), as required above. Thus, as in the original LCP rule, the exponential slopes of the pre- and postsynaptic neuron govern the STDP time windows. Repeated application of such a pre-post pairing drives the memristive device into its corresponding voltage-dependent saturation level.

Figure 5: Modification of the original LCP rule for the BFO memristive device, from top to bottom: pre- and postsynaptic voltages/waveforms, exponential decay with τpre resp. τpost (postsynaptic waveform plotted as inverse to illustrate waveform function); resultant voltage difference across the memristive device and corresponding memristance modification thresholds (horizontal grey lines); and memristance change as computed from the model of Sec. 2.3.

Similar to the original LCP rule, short-term plasticity of the postsynaptic action potentials can now be added to make the model more biologically realistic (e.g. with respect to the triplet learning protocol [6]). We employ the same attenuation function as in equation (4), adjusting the duration of the postsynaptic action potential, see the second curve from the top in Fig. 5.

Please note: one further important advantage of using this modified LCP rule is that both the pre- and the postsynaptic waveform are causal, i.e. they start only at the pre- respectively postsynaptic pulse. This is in contrast to most currently proposed waveforms for memristive learning, i.e. those waveforms have to start well in advance of the actual pulse [4], which requires preknowledge of a pulse occurrence.
Especially in an unsupervised learning context with self-driven neuron spiking, this preknowledge simply does not exist.

Figure 6: Results for the STDP protocol: (a) model simulation, (b) measurement with the BFO memristive device (curves for τpre = 15 ms, τpost = 35 ms and for τpre = 30 ms, τpost = 50 ms).

3.2 Measurement results

The waveforms developed in the previous section can be tested in actual protocols for synaptic plasticity. As a first step, we investigate the behaviour of the BFO memristive device in a standard pair-based STDP experiment. For this, we apply 60 spike pairings of different relative timings at a low repetition frequency (4 Hz), comparable to biological measurement protocols [17]. Measurements were performed with a BFO memristive device as shown in Fig. 2. As shown in the model simulations in Fig. 6a, the developed waveforms are transformed by the memristive device into approximately exponentially decaying conductance changes. This is in good agreement with biological measurements [17] and common STDP models [7]. The model results are confirmed in measurements of the BFO memristive device, as shown in Fig. 6b. Notably, the measurements result in smooth, continuous curves. This is an expression of the continuous resistance change in the BFO material, which results in a large number of stable resistance levels. This is in contrast, e.g., to memristive materials that rely on ferroelectric switching, which exhibit a limited number of discrete resistance levels [18, 1]. Moreover, the nonlinear behaviour of the BFO memristive device has only a limited effect on the resulting STDP learning window. The resistance change is directly linked to the applied waveforms. For example, as shown in Fig. 6, an increase in the time constants results in correspondingly longer STDP time windows. Following our modeling approach, these time constants are directly linked to the time constants of the underlying neuron and synapse model.

Figure 7: Measurement results for the triplet protocol of Froemke and Dan [7]: (a) biological measurement data, adapted from [7], (b) measurement with the BFO memristive device.

Experiments have shown that the weight changes of single spike pairings, as expressed by STDP, are nonlinearly integrated when occurring shortly after one another. Commonly, triplets of spikes are used to investigate this effect, as carried out by [7]. The main deviation of these experimental results compared to a pure STDP rule occurs for the post-pre-post triplet [6], which can be attributed to postsynaptic adaptation [7]. With this adaptation included in our waveforms (equation (4), as seen in the action potential duration in the second curve from the top of Fig. 5), the BFO memristive device measurements closely resemble the post-pre-post results of [7]. The measurement results in Fig. 7b show more depression than the biological data for the pre-post-pre triplet (upper left quadrant). This is because changes in resistance need some time to build up after a stimulating pulse. In the pre-post-pre case, the weight increase has not fully developed when it is overwritten by the second presynaptic pulse, which results in a weight decrease.
For keeping the stimulation waveforms as simple as possible, only postsynaptic adaptation has been included. However, it has been shown that presynaptic short-term plasticity also has a strong influence on long-term learning [19, 6]. With our modeling approach, a model of short-term plasticity can easily be connected to the stimulation waveforms by modulating the length of the presynaptic pulse. Along the same lines, the postsynaptic waveform can be shifted by a slowly changing voltage analogous to the original LCP rule (cf. Eq. 1) to introduce a metaplastic regulation of weight potentiation and depression [6]. Together, these extensions open up an avenue for the seamless integration of different forms of plasticity in learning memristive devices.

3.3 Conclusion

Starting from a waveform-based general plasticity rule and a model of the memristive device, we have shown a direct way to go from these premises to biologically realistic learning in a BiFeO3 memristive device. Employing the LCP rule for memristive learning has several advantages. As a memristor is a two-terminal device, the separation of the learning into two waveforms in the LCP rule lends itself naturally to employing it in a passive array of memristors [1, 4]. In addition, this waveform-defined plasticity behaviour enables easy control of the STDP time windows, which is further aided by the excellent multi-level memristive programming capability of the BiFeO3 memristive devices. There is only a very small number of memristors for which plasticity has been demonstrated on actual devices at all [18, 1]. Among those, our highly configurable, finely grained learning curves are unique: other implementations exhibit statistical variations [1], can only assume a few discrete levels [18], or have device-inherent learning windows that cannot be adjusted [20]. This comes at the price that, in contrast to e.g. phase-change materials, BiFeO3 is not easily integrated on top of CMOS [8].

The waveform-defined plasticity of the LCP rule enables the explicit inclusion of short-term plasticity in long-term memristive learning, as shown for the triplet protocol. As the pre- and postsynaptic waveforms are generated in the CMOS neuron circuits below the memristive array [1], short-term plasticity can thus be added at little extra overall circuit cost and without modification of the memristive array itself. In contrast to our easily controlled short-term plasticity, the only previous work targeting memristive short-term plasticity employed intrinsic (i.e. non-controllable) device properties [20]. To the best of our knowledge, this is the first time triplets or other higher-order forms of plasticity have been shown for a physical memristive device.

In a wider neuroscience context, the waveform-defined plasticity shown here could be seen as a general computational principle: synapses are not likely to measure time differences as in naive forms of STDP rules; they are more likely to react to local static [21] and dynamic [5] state variables. Some interesting predictions could be derived from this, e.g. STDP time constants that are linked to synaptic conductance changes or to the membrane time constant [22, 6]. These predictions could easily be verified experimentally.

Acknowledgments

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no.
269459 (Coronet).

References

[1] S. H. Jo, T. Chang, I. Ebong, B. B. Bhadviya, P. Mazumder, and W. Lu, “Nanoscale memristor device as synapse in neuromorphic systems,” Nano Letters, vol. 10, no. 4, pp. 1297–1301, 2010.
[2] L. Chua and S. M. Kang, “Memristive devices and systems,” Proceedings of the IEEE, vol. 64, no. 2, pp. 209–223, Feb. 1976.
[3] S. Fusi, M. Annunziato, D. Badoni, A. Salamon, and D. Amit, “Spike-driven synaptic plasticity: Theory, simulation, VLSI implementation,” Neural Computation, vol. 12, pp. 2227–2258, 2000.
[4] M. Laiho, E. Lehtonen, A. Russel, and P. Dudek, “Memristive synapses are becoming reality,” The Neuromorphic Engineer, November 2010. [Online]. Available: http://www.inenews.org/view.php?source=003396-2010-11-26
[5] S. Dudek and M. Bear, “Homosynaptic long-term depression in area CA1 of hippocampus and effects of N-methyl-D-aspartate receptor blockade,” PNAS, vol. 89, pp. 4363–4367, 1992.
[6] C. Mayr and J. Partzsch, “Rate and pulse based plasticity governed by local synaptic state variables,” Frontiers in Synaptic Neuroscience, vol. 2, pp. 1–28, 2010.
[7] R. Froemke and Y. Dan, “Spike-timing-dependent synaptic modification induced by natural spike trains,” Nature, vol. 416, pp. 433–438, 2002.
[8] Y. Shuai, S. Zhou, D. Burger, M. Helm, and H. Schmidt, “Nonvolatile bipolar resistive switching in Au/BiFeO3/Pt,” Journal of Applied Physics, vol. 109, no. 12, p. 124117, 2011. [Online]. Available: http://link.aip.org/link/?JAP/109/124117/1
[9] E. Bienenstock, L. Cooper, and P. Munro, “Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex,” Journal of Neuroscience, vol. 2, pp. 32–48, 1982.
[10] W. Gerstner and W. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.
[11] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, “The missing memristor found,” Nature, vol. 453, no. 7191, pp. 80–83, May 2008. [Online]. Available: http://dx.doi.org/10.1038/nature06932
[12] C. Wang, K.-J. Jin, Z.-T. Xu, L. Wang, C. Ge, H.-B. Lu, H.-Z. Guo, M. He, and G.-Z. Yang, “Switchable diode effect and ferroelectric resistive switching in epitaxial BiFeO3 thin films,” Applied Physics Letters, vol. 98, no. 19, p. 192901, 2011.
[13] Y. Shuai, S. Zhou, C. Wu, W. Zhang, D. Bürger, S. Slesazeck, T. Mikolajick, M. Helm, and H. Schmidt, “Control of rectifying and resistive switching behavior in BiFeO3 thin films,” Applied Physics Express, vol. 4, no. 9, p. 095802, 2011. [Online]. Available: http://apex.jsap.jp/link?APEX/4/095802/
[14] J. J. Yang, M. D. Pickett, X. Li, D. A. A. Ohlberg, D. R. Stewart, and R. S. Williams, “Memristive switching mechanism for metal/oxide/metal nanodevices,” Nature Nanotechnology, pp. 429–433, July 2008. [Online]. Available: http://dx.doi.org/10.1038/nnano.2008.160
[15] D. Querlioz, P. Dollfus, O. Bichler, and C. Gamrat, “Learning with memristive devices: How should we model their behavior?” in Nanoscale Architectures (NANOARCH), 2011 IEEE/ACM International Symposium on, June 2011, pp. 150–156.
[16] B. G. Streetman and S. K. Banerjee, Solid State Electronic Devices. Pearson Prentice Hall, 2006.
[17] G.-Q. Bi and M.-M. Poo, “Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type,” Journal of Neuroscience, vol. 18, no. 24, pp. 10464–10472, 1998.
[18] F. Alibart, S. Pleutin, O. Bichler, C. Gamrat, T. Serrano-Gotarredona, B. Linares-Barranco, and D. Vuillaume, “A memristive nanoparticle/organic hybrid synapstor for neuroinspired computing,” Advanced Functional Materials, vol. 22, no. 3, pp. 609–616, 2012. [Online]. Available: http://dx.doi.org/10.1002/adfm.201101935
[19] R. Froemke, I. Tsay, M. Raad, J. Long, and Y. Dan, “Contribution of individual spikes in burst-induced long-term synaptic modification,” Journal of Neurophysiology, vol. 95, pp. 1620–1629, 2006.
[20] T. Ohno, T. Hasegawa, T. Tsuruoka, K. Terabe, J. Gimzewski, and M. Aono, “Short-term plasticity and long-term potentiation mimicked in single inorganic synapses,” Nature Materials, vol. 10, pp. 591–595, 2011.
[21] A. Ngezahayo, M. Schachner, and A. Artola, “Synaptic activity modulates the induction of bidirectional synaptic changes in adult mouse hippocampus,” The Journal of Neuroscience, vol. 20, no. 3, pp. 2451–2458, 2000.
[22] J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner, “Optimal spike-timing dependent plasticity for precise action potential firing in supervised learning,” Neural Computation, vol. 18, pp. 1309–1339, 2006.

Subversion(r): Empirical Design Methodology from the Perspective of Integrated Circuit Design
By Radoslav Prahov, Holger Schmidt and Achim Graupner (ZMDI)
April 6, 2013 in Institute of Research Engineers and Doctors
Publisher: Institute of Research Engineers and Doctors and SEEK Digital Library
Proc. of the Second Intl. Conf. on Advances in Electronics and Electrical Engineering — AEEE 2013
Copyright © Institute of Research Engineers and Doctors. All rights reserved. ISBN: 978-981-07-5939-1 doi:10.3850/978-981-07-5939-1_11

Subversion(r): Empirical Design Methodology from the Perspective of Integrated Circuit Design
Radoslav Prahov, Holger Schmidt and Achim Graupner
Zentrum Mikroelektronik Dresden AG, Dresden, Germany
{radoslav.prahov, holger.schmidt, achim.graupner}@zmdi.com

Abstract—An aspect of primary significance in integrated circuit (IC) design is configuration management of design data, i.e., the task of keeping a project comprising a multiplicity of revisions well organized. Apache’s Subversion(r) is a software tool that can facilitate this task. It manages revisions of documentation, source code, and a wide variety of files, and it automates storing and retrieving revisions. Unfortunately, Subversion(r) provides insufficient support for IC projects consisting of large numbers of managed items. We address this problem by introducing, discussing, and demonstrating several approaches that improve the performance of Subversion(r) when handling a vast amount of files and directories. Our approaches are division of the working copy into smaller pieces with a decent granularity, conversion of the working copy into a single tarball file, and implementation of a referred central working copy. Each method is incorporated into the configuration management flow through a lifecycle of IC development, which offers the opportunity to compare and validate each technique.

Keywords—configuration management, Subversion, design methodology, performance.

I. Introduction

With the ongoing requirements for accomplishing higher productivity and quality while ensuring effective control before, throughout, and beyond the integrated circuit (IC) development process, configuration management (CM) of design data has become an important aspect of modern IC projects. Its role is to assist designers by controlling, tracking and coordinating every single change that occurs in the file system during a project lifecycle by gathering evolutionary revisions [1]. This allows users to perform unlimited updates to the project information, but at the same time, they can be assured that each user has the latest version. Users with an old version have the ability to either update their working copy to the most recent version or continue employing their old one and propagate their changes afterwards. Conversely, users with a head version (the version presently designated as most current) can check out a previous version; for example, for comparison if anything regresses.

Changes during IC projects could have a diverse character. Beyond basic file modifications, they may also involve adding, removing, or updating directories; modifying the hierarchy; altering group permission and file and directory ownership; and renaming files and directories [2], [3].

Although automated support for CMs has existed for over thirty years, its prominence in the framework of IC design has sharply increased during the last decade [4]. Early automated support tools suffered from inadequate functionality and applicability. In contrast, modern tools offer advanced utilities and features [5]-[8]. Despite the evolution from simple tools to comprehensive environments, automated support for the CM is still confronted with challenges due to the advent of new innovations and technologies [9]. One such challenge is the ever-increasing complexity of IC projects [10]-[12]. For instance, because of the more complex verification flows with every new IC generation and process node, IC projects tend to comprise ever-growing design data.

As the most prominent CM tool, Apache’s Subversion® (SVN) must respond to the trend toward more complex IC designs with higher file counts. However, it has a deficit when it comes to dealing with great numbers of managed items, as its efficiency degrades with a growing quantity of files and directories [13]-[15]. In addition to breaking a tool’s environment, this leads to unacceptably long operation times. Even in projects with average complexity, this is a severe issue. In the majority of cases, the increased operation time drags down productivity because the longer waiting time cannot be used effectively. Reducing it not only accelerates the IC design process, but also lowers the stress level for the project’s IC developers.

The present study was designed to evaluate different approaches that could adapt SVN to handling a vast quantity of files and directories more efficiently. The approaches were incorporated into the CM and were employed during the lifetime of an IC design project. The effect on SVN is compared, and the improvement is defined in this study.

The remainder of this paper is organized as follows. In section II, a concise evolutionary history and features of SVN are introduced, related work is presented, and challenges of migrating from one CM tool to another are identified. This is followed by section III, where the problem of controlling a multiplicity of files and directories is addressed, different design methodology techniques that allow SVN performance improvement are presented, and their implications for the integrated circuit workflow architecture and revision control system are discussed. Section IV explains the industrial project into which the design methodologies were incorporated and presents an evaluation and discussion. Section V concludes the paper.
II. Background

Since the advent of Concurrent Versions System (CVS), initially as a set of shell scripts coded by Dick Grune (1986) and later converted to a C program by Brian Berliner (1989) [16], CVS has become one of the most prominent version control tools, widely employed in various projects. Even now, over two decades later, it is the second most widespread tool in terms of market share (13%) [17]. During this extensive period of development and usage, its benefits were identified; however, substantial problems emerged. Hence, in 2000, a group including former CVS developers launched SVN, a version control tool explicitly intended to be a successor of CVS with a similar design and improved functionality [18]. Its first official release arrived in 2004. At present, SVN is the most widespread version control tool, with a market share of 51% [17].

A. SVN

SVN’s repository core is the storage backend where all versioned data are stored. Each time a client successfully commits certain changes, the SVN repository creates a new snapshot of the versioned file-system tree, called a revision or version. An increasing, unique number globally identifies each version. The snapshot contains the revision directory structure, file meta-data, and file contents, which might be delta compressed to save space. The delta compression keeps only the differences between successive versions of files. To retrieve a specific file revision, SVN composes a sequence of deltas up to the last full version. Because searching through all file revisions is time-consuming, full versions called skip-deltas are inserted between deltas.

On the client side, for each file in the working copy, a pristine copy, the revision number (on which the local file is based), and the timestamp of its last update are stored. The pristine copy allows using several commands without any repository interaction: checking file status, comparing files with their unmodified version (svn diff) and restoring contents (svn revert). Committing changes from the client’s working copy to the repository does not trigger a synchronization of other locally unmodified files. Thus, after committing a subset of the working copy, it is left in a mixed-revision state; therefore, the base revision number must be tracked for every file and directory. To reproduce all upstream changes, the svn update command pulls all changes, optionally only up to a specified revision. Local modifications are automatically reintegrated, and the usual conflict-resolution workflow is applied.

SVN supports branching, merging, and tagging using an additional directory layer in the repository hierarchy; typically, main development happens in the trunk, while development branches and tagged versions reside in correspondingly named directories. Since version 1.5 (2008), merge information is automatically stored in the path meta-data (svn:mergeinfo). This simplifies merging between branches and the trunk, as parental relations are not naturally represented in the repository tree structure. However, compared to most common distributed version control systems, several merging issues still remain ([18], cf. Chapter 4).

B. Problem Statement

Despite SVN’s dominant market share and improved functionality, SVN often suffers from a performance deficiency when it handles a multiplicity of managed items. This is not actually recent news, nor an exclusive trait of SVN, since the signs were first observed in ancestral CVS. In 1989, prolonged times were recorded while CVS managed the Prisma™ project by Prisma, Inc., which comprised over 17,000 files [19].

Subsequently, the effect of escalated IC project complexity upon the behavior of SVN was assessed in [13]-[15]. How the system can be adapted to a multitude of files with a different origin was presented in [14] by investigation of several typical user cases. SVN performance limitations and suggestions for how they can be overcome were also discussed. When investigating sources of bottlenecks in SVN, the most significant finding was the relationship between the number of managed items and the execution time of SVN operations. Furthermore, the relationship is quadratic for commit commands and linear for add/checkout commands.

Taking into account all findings and results of the studies mentioned above, the optimum effectiveness of SVN can be achieved by keeping project data compact and locating the repository and working copy in a RAM disk on a sufficiently powerful machine with an adequately spacious cache area. However, with the ongoing growth of IC project complexity, the reduction of design data is hardly applicable. Even though hosting the repository and working copies in a RAM disk has proven to be the most efficient configuration [14], in our view, it remains a theoretical technique with limited possibility of application because in a considerable proportion of cases, the required allocation of space is substantial. Furthermore, implementing mandatory security measures, such as backup and regular snapshots, is more complex. Another setback is that job distribution techniques, such as using a load-shared facility (LSF), which is widespread, cannot be employed.

Regardless of the flaws discussed above, a substitution of the CM tool is often not an alternative and is hardly applicable because of the CM’s tight integration into the design workflow. For instance, in the project for Apache Software Foundation’s OpenOffice™ (for which the repository consisted of over 66,000 files), it was reported that various tools and wrapper scripts, such as issue tracking, authentication, and some tools specific to the project (EIS, LION, etc.), had to be entirely redesigned [20]. Furthermore, programs that were supplementary to the IC development process (such as tools facilitating the design and SoC project management and the GUI support tools) are only compatible with specific CM tools, usually SVN and Perforce™ (trademark of Perforce Software, Inc.). Hence, in this work, we address the SVN bottleneck and propose three approaches that are capable of mitigating the SVN dependence on the amount of managed items.
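The scaling behaviour described above is easy to reproduce on a small scale. The following Python sketch times svn add/commit for working copies of increasing size against a local throw-away repository; it assumes the svn and svnadmin command-line clients are installed, and the file counts and file contents are arbitrary placeholders.

import os, subprocess, tempfile, time

def run(*cmd, cwd=None):
    subprocess.run(cmd, cwd=cwd, check=True, stdout=subprocess.DEVNULL)

def time_commit(n_files):
    """Measure svn add/commit time for a working copy containing n_files small files."""
    with tempfile.TemporaryDirectory() as tmp:
        repo = os.path.join(tmp, "repo")
        wc = os.path.join(tmp, "wc")
        run("svnadmin", "create", repo)
        run("svn", "checkout", "file://" + repo, wc)
        for i in range(n_files):
            with open(os.path.join(wc, f"cell_{i}.sp"), "w") as f:
                f.write("* placeholder netlist\n")
        t0 = time.time()
        run("svn", "add", "--force", ".", cwd=wc)        # schedule all new files
        run("svn", "commit", "-m", "initial import", cwd=wc)
        return time.time() - t0

# Doubling the file count illustrates how the operation time grows with managed items.
for n in (1000, 2000, 4000, 8000):
    print(f"{n:5d} files: {time_commit(n):6.1f} s")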
III. Design Methodology

In the following section, three different techniques are presented, as each of them allows reducing the quantity of files and directories that SVN handles per operation.

A. Division of the Working Copy into Smaller Pieces with a Decent Granularity (DM1)

The first proposal is based on division of the working copy into smaller pieces with a decent granularity, organized in a block-based hierarchy. In such a structure, each block can be processed individually in parallel. In addition to parallelizing the operations, this allows reducing the number of files to be manipulated (submitted/updated) at once.

Putting this approach into practice can be achieved by different methods. One method would be to divide the project into as many various unit types as possible. A unit type constitutes a heterogeneous, separate design part; for example, a directory of a circuit block or sub-block. Different unit types are generally limited by the character of the project data. Therefore, a detailed verification under the project hierarchy and design data might be needed for such architecture.

Figure 1. Classical structure of the analog library:
analog_library
  +- bandgap
  +- bandgap_resistors
  +- bandgap_amplifier
  +- tb_bandgap
  +- oscillator_20kHz
  +- oscillator_trimunit
  +- oscillator_schmitttrigger
  +- tb_oscillator
  +- vref_18
  +- …

We chose to divide the data as shown in Fig. 1 and Fig. 2. Since IC designs possess a decent granularity by nature, the analog library depicted is suitable for dividing into its heterogeneous subdirectories (bandgap, oscillator, and vref). However, the effort of reorganizing existing projects should be taken into consideration. Even so, this is one of the focal advantages of the approach, since the method can be employed directly out-of-the-box for a substantial portion of IC designs.

Figure 2. Organization of the analog library into smaller pieces with decent granularity; below level 1, each subfolder can be manipulated in isolation:
bandgap_library
  +- bandgap
  +- bandgap_resistors
  +- bandgap_amplifier
  +- tb_bandgap
oscillator_20kHz_library
  +- oscillator_20kHz
  +- oscillator_trimunit
  +- oscillator_schmitttrigger
  +- tb_oscillator
vref_18_library
  +- vref_18
  +- …
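A minimal sketch of how such a block-based split can be exploited is shown below: each library from Fig. 2 is handled as its own small working copy, and the checkouts run in parallel. The repository URL, the block names and the trunk layout are hypothetical placeholders, not the project's actual repository structure.

import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical block-level repository layout following Fig. 2.
REPO = "https://svn.example.com/ic_project"
BLOCKS = ["bandgap_library", "oscillator_20kHz_library", "vref_18_library"]

def checkout(block):
    """Check out one library block as an independent, small working copy."""
    subprocess.run(["svn", "checkout", f"{REPO}/{block}/trunk", block], check=True)
    return block

# Each block is a separate working copy, so operations can run in parallel and a
# commit only ever touches the files of a single block.
with ThreadPoolExecutor(max_workers=4) as pool:
    for done in pool.map(checkout, BLOCKS):
        print("checked out", done)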
B. Conversion of the Working Copy into a Single Tarball File (DM2)

The basic principle of the approach introduced in [13] and [14] is depicted in Fig. 3. The quantity of managed files and directories is decreased by combining them into a tarball archive, which accelerates SVN. Initially, this principle was only proposed for binary files, for which the number of files can be efficaciously reduced by two mutually complementary means [13].

In the first case, the whole directory structure is converted into one single monolithic block. For that purpose, data that are to be imported into the repository are transformed into one file via a simple tar operation, and then the tar file is uploaded to the repository. When the data must be accessed (updated), the tar file is untarred and they are again available. Since the revision control system is facilitated and does not need to recursively deal with the initial directory structure, but rather with just a single block, an acceleration of about 15 times was reported [13].

The second method takes the first case a step further by compressing the single block. The process is essentially the same, except that the tar file is compressed before being uploaded to the repository. This could be done in various ways; albeit, a simple and effective one is the standard UNIX gzip command. Conversely, when the uploaded file has to be accessed, it must also be decompressed. Due to the compressed character of the block, an additional speed boost and shrinkage of the space that is consumed on the server are observed.

Figure 3. Three techniques for importing into the SVN repository: plain file structure, simple tar file, and compressed tar file, with respective performance (1x, 15x, 19x).

Since in [14] both principles were proven to be efficient, not only for binary files, but also for a wide range of file formats, the latter approach was implemented in the IC project as shown in Fig. 4. The tar/compress and untar/decompress steps were entirely automated due to their routine and particularly error-prone character.

Figure 4. Details of the compressed tar file approach: the files are tarred and compressed before commit; on access, the archive is updated from the repository, then decompressed and untarred.

The first step, tar/compress, was carried out in the following sequence:
• Directory existence verification. This step verifies that the directory that was selected to be manipulated exists. If not, the sequence is terminated.
• SVN management validation. Whether or not the specified directory is already managed by SVN is validated. If it is SVN managed, the directory must be erased from the SVN repository.
• Tar file existence verification. This step verifies whether a tar file with an identical name exists. If so, its content is compared against the specified directory. If they are equivalent, the sequence is discontinued. Otherwise, the step is executed and the tar file is either brought into existence or updated.

The next step, untar/decompress, is executed according to the following algorithm:
• Tar file update verification. Whether the tar file has already been updated in the local directory structure is verified. If not, the algorithm is terminated.
• SVN management validation. This step validates whether the directory included in the tar file has already been managed by SVN. If it is, the algorithm is discontinued.
• Workspace tar content SVN management verification. If the directory already exists in the user’s workspace but it is not managed by SVN, it is automatically removed. Next, the tar file is extracted.

The purpose of automating the transformation of the initial data structure into a monolithic block is not only to eliminate all trivial tasks, but also to keep this relatively error-prone phase protected.
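The following sketch condenses the tar/compress and untar/decompress steps of Fig. 4 into two helper functions. It is not the wrapper used in the project: the consistency checks of the sequence above are reduced to simple existence and svn status tests, and all paths, names and the commit message are hypothetical.

import os, subprocess, tarfile

def run(*cmd, cwd=None):
    subprocess.run(cmd, cwd=cwd, check=True)

def pack_and_commit(block_dir, wc_dir, message):
    """Replace a directory tree by a single .tar.gz inside the SVN working copy (DM2)."""
    if not os.path.isdir(block_dir):                 # directory existence verification
        raise FileNotFoundError(block_dir)
    archive = os.path.join(wc_dir, os.path.basename(block_dir) + ".tar.gz")
    with tarfile.open(archive, "w:gz") as tar:       # tar + gzip in one step
        tar.add(block_dir, arcname=os.path.basename(block_dir))
    status = subprocess.run(["svn", "status", archive], capture_output=True, text=True)
    if status.stdout.startswith("?"):                # archive not yet managed by SVN
        run("svn", "add", archive)
    run("svn", "commit", "-m", message, archive)

def update_and_unpack(wc_dir, archive_name, dest_dir):
    """Update the archive from the repository and restore the directory tree."""
    run("svn", "update", archive_name, cwd=wc_dir)
    with tarfile.open(os.path.join(wc_dir, archive_name), "r:gz") as tar:
        tar.extractall(dest_dir)

# Example usage with placeholder paths:
# pack_and_commit("/work/analog_library", "/work/svn_wc", "weekly snapshot of analog_library")
# update_and_unpack("/work/svn_wc", "analog_library.tar.gz", "/work/restore")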
C. Referred Central Working Copy (DM3)

This approach differs from the others in that it involves maintaining a central working copy for read-only access. A data unit being designed within an IC project is generally developed by a single engineer but at the same time is referenced by others. Since a significant proportion of elements in the user’s working space is not modified and not directly employed (but the elements still need to be referenced), these elements can be referred to the central working copy through soft links, whereas all developed elements remain in the regular working copy.

Implementation of the structure presented above is illustrated in Fig. 5. It consists of a three-level hierarchy, adding an additional level to the conventional server-client configuration traditionally employed by the revision control system.

Importing (checking in) the design data to the repository is performed with the standard method; i.e., the new structure does not affect this process at all. When the data are to be checked out, instead of being transferred directly from the server to the client, they are initially copied onto a central replication area and then the workspace is created. Each unit from the workspace could either point to the replication area as a soft link or could be represented physically. Soft links have read-only access because blocks that are in the replication area cannot be modified. If modification is required, they must be transferred to the local workspace first. Each block can be converted at any time, replacing a link with local data and vice versa. The replication area cannot be updated (for fixed revisions) in terms of replacing an old version of a block with a newer one, but it can comprise more than one revision of a certain block. Of course, all outdated block versions could be removed once they are no longer required. From the methodology description up to this point, it could be inferred that the replication area behaves as a typical second server, except that its data do not require being backed up, as they can readily be recovered at any given moment and they do not contain any modifications.

This approach has the prerequisites of a block-oriented design data structure and decent granularity, as discussed previously for Fig. 2. The method smoothly allows each block to be fetched into the working copy either as a soft link or as physical data. For instance, blocks that are never employed could be referred to the central working copy, whereas all the others would remain part of the regular working space. This measure also allows parallel processing and saves disk space.

The linked central working copy technique can be implemented with different methods. One is to develop proprietary scripts. However, especially in the field of IC design, this tends to be limited by the ability of the programmers who develop and maintain them (they might lack training or experience and might not be diligent). Furthermore, scripts are relatively inflexible, and even a slight environment alteration could trigger discrepancies and inconsistencies, which are difficult to fix. Therefore, we chose a tool available on the market for implementing the technique: Methodics’ BuildIC™. It is an SoC assembly engine, part of a platform for SoC design management [21]. However, we consider it to be also beneficial for workspace management, as it has a “shared area,” which has the identical functionality as the replication area.

Figure 5. Schematic of the referred central working copy approach: blocks in the workspace are either local and writable (Block A, r/w) or soft links with read-only access (Block B, Block C, r/o) to fixed revisions (Block B.revX, Block C.revY) in the replication area, which is populated from the repository. Blocks that are not required for write access in the user’s workspace are referred to the central working copy.
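To make the mechanism concrete, a possible scripted variant of the referred central working copy could look as follows. This is only a sketch: the project itself used Methodics' BuildIC for this purpose, and all paths, URLs, block names and revision numbers below are hypothetical.

import os, subprocess

REPLICATION_AREA = "/projects/replication"          # shared, read-only area (assumed path)
REPO = "https://svn.example.com/ic_project"         # placeholder repository URL

def populate_replication(block, revision):
    """Check out a fixed revision of a block into the shared replication area."""
    target = os.path.join(REPLICATION_AREA, f"{block}.rev{revision}")
    if not os.path.isdir(target):
        subprocess.run(["svn", "checkout", "-r", str(revision),
                        f"{REPO}/{block}/trunk", target], check=True)
    return target

def link_block(workspace, block, revision):
    """Reference a block via soft link instead of a private working copy."""
    os.makedirs(workspace, exist_ok=True)
    link = os.path.join(workspace, block)
    if not os.path.islink(link):
        os.symlink(populate_replication(block, revision), link)

def materialize_block(workspace, block):
    """Convert a referenced block into a locally writable working copy."""
    link = os.path.join(workspace, block)
    if os.path.islink(link):
        os.unlink(link)
    subprocess.run(["svn", "checkout", f"{REPO}/{block}/trunk", link], check=True)

# Example: reference two stable blocks, work locally on the third.
ws = os.path.expanduser("~/ws_zspm")                # hypothetical workspace path
for blk in ("bandgap_library", "vref_18_library"):
    link_block(ws, blk, revision=1234)
materialize_block(ws, "oscillator_20kHz_library")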
IV. User Case

The design methodologies that were described in the previous section were incorporated into the CM workflow of our ZSPM1000 Smart Power Management (SPM) IC project. The ZSPM1000 is a configurable, true-digital single-phase pulse-width-modulation (PWM) controller for high-current, non-isolated DC/DC power supplies supporting switching frequencies up to 1 MHz. It includes a PMBus™-configurable digital power control loop that incorporates output voltage sensing, average inductor current sensing, and extensive fault monitoring and handling options. Project data comprised 48 blocks with a total of 50,000 files and a total size of 5.9 GB.

A. DM2

Initially, DM2 was implemented from the foundation of the project. According to the design team, this principle allowed them to solve the SVN performance limitations; however, the improved efficiency comes at a cost. The application of SVN on a file level is not possible. Standard features of revision control systems, such as file history, revert, selective checkout and locking functionalities, are not available.

Furthermore, changes between different revisions are not traceable. For example, the following fundamental questions for revision control systems cannot be answered: Which files were changed? Who made the change? What was changed in the file? What did the file contain in a particular revision? All difference comparison features are inevitably lost. The graphical user interface support, which is one of the focal advantages of SVN, is no longer efficacious.

In our efforts to address these issues, a policy of extensive commit messages was put into force. Nonetheless, we rapidly came to realize that this hardly helps when design teams are spread over different locations. Even so, in our experience, the issue can be avoided if an intellectual property (IP) project design is employed. An IP constitutes a standard functional block that is part of the IC but was developed separately from the project, either internally or sourced from a third party. During the design phase, IP blocks are immutable. As such, they are suitable for tar and untar because of the infrequency of modifications to them.

Another hurdle is the extra step of transforming the initial data into a single block. This should be considered a critical phase, due to its routine and error-prone character. A possible solution is an automation of the process of handling the data with a wrapper script. Although the application of the automation script mitigates the problem, it should still be considered critical. By presumption, any directory with a name equivalent to the content of the tar file is deleted, because we determine that, by and large, it is left over from a preceding execution; however, the folder might also contain important, modified, but not yet committed project data. Moreover, due to the development and maintenance of scripts, a further level of complexity is added to the project design flow.

B. DM3 and DM1

Due to the drawbacks identified in DM2, an alternative combined DM3 and DM1 CM workflow was integrated into the ZSPM1000 project. The workflow achieves an improved performance compared with DM2. This can be explained by the parallel algorithm in which operations are performed. When the same operation is executed in a sequential manner, the replication area is only composed during the first execution and reused later. Hence, the most rapid execution time is achieved when the replication area has already been brought into existence, because of the minimal time that is required for establishing soft links.

In contrast, if the replication area does not already exist and all units are required to be available in a local workspace, the longest time is needed. However, in this project experience, only a small number of blocks were required in the users’ workspace for write access (locally). This reflects a dedicated designer who needs to reference all blocks during a project lifetime but modifies only a minority of them.

The improved performance is explained by the reduced quantity of items that SVN handles (during checkout) per operation and also by the parallel manner in which the commands are executed. Furthermore, the establishment of links is performed in very little time. It should be noted that, in contrast to the tar-untar approach, all benefits of the revision control system are preserved, which is a key asset of this mixed technique.

Several inconveniences were explored during the lifetime of the project, however. Because of DM1, it is not possible to use a single operation to atomically commit multiple blocks that have been modified with related refinements, because each block must be managed separately (Fig. 3). It was reported by the design team that although this slightly affects the revision control tool history by adding further revision numbers for each manipulation, it does not influence the workflow. Additional feedback confirmed that the application of this technique leads to significant performance improvement and a reduction of the amount of data per operation.

The replication area in DM3 does not particularly affect the IC design workflow. It is not a critical element and does not require any special maintenance measures. Even though this area might be considered to consume extra disk space, it actually saves space because the user’s working copies are reduced in size. The central working copy can be updated automatically by a post-commit script.
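Such a post-commit script can be very small. The sketch below assumes a hypothetical replication path and simply brings a shared head working copy to the freshly committed revision; Subversion invokes the hooks/post-commit script with the repository path and the revision number as arguments.

#!/usr/bin/env python3
# Sketch of a repository post-commit hook refreshing the central working copy.
# The replication path is a hypothetical location, not taken from the paper.
import subprocess, sys

REPLICATION_WC = "/projects/replication/head"   # shared read-only head working copy

def main():
    repo_path, revision = sys.argv[1], sys.argv[2]
    # Bring the shared working copy to the committed revision.
    subprocess.run(["svn", "update", "-r", revision, REPLICATION_WC], check=True)
    print(f"replication area updated to r{revision} of {repo_path}")

if __name__ == "__main__":
    main()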
Nonetheless, the disadvantage of the automatic updating mechanism is the alteration of files without prior notice; for example, when a designer refers to the head revision of a given block and that block has changed. As a result, the designer would automatically be referred to the new head revision. However, in addition to the discomfort of unexpected changes in the working copy structure, this can lead to breaking the environment, e.g., regressions and debug sessions, which count on stable data. Hence, we chose to set up links to the central working area for fixed revisions and to update them when required.

V. Summary and Future Work

This paper has presented three different approaches that adapt the CM tool Subversion® to dealing with a multiplicity of managed items. Since each of the proposed approaches was ingrained in an industrial IC project flow from “scratchpad” to the final product, they were compared and validated in a realistic environment.

The demonstrated methods will be particularly beneficial in future IC design projects for which the revision control tool would otherwise be stretched to its breaking point by the increased quantity of project data.

Thus far we have mainly explored the design methodology from the user’s perspective. Our next step in this research will be to perform a performance case study. We are interested in comparing the performance improvement of each technique.

Acknowledgment

The authors wish to thank the design team for the ZSPM1000 project at ZMD AG for their technical assistance and feedback during the lifetime of the project as well as Ms. M. Wallace and Mr. D. Aitken for proofreading the manuscript. The work reported in this study was funded by the Seventh Framework Program of the EC under grant agreement no. 237955 (FACETS-ITN).
References

[1] 828-2012 – IEEE Standard for Configuration Management in Systems and Software Engineering, March 2012.
[2] C. Kidd, “The case for configuration management,” IEE Review, vol. 47, pp. 37-41, September 2001.
[3] K. Hinsen, K. Läufer, and G. Thiruvathukal, “Essential tools: version control systems,” Computing in Science & Engineering, vol. 11, pp. 84-91, November-December 2009.
[4] M. Rochkind, “The source code control system,” IEEE Transactions on Software Engineering, vol. 4, pp. 364-370, December 1975.
[5] K. H. Lee, “Design and implementation of a configuration management system,” Global Telecommunications Conference, vol. 3, pp. 1563-1567, November-December 1993.
[6] A. Do, “The impact of configuration management during the software product’s lifecycle,” Digital Avionics Systems Conference, vol. 1, pp. 1.A.4-1 – 1.A.4-8, November 1999.
[7] A. Chan and S. Hung, “Software configuration management tools,” Software Technology and Engineering Practice, pp. 238-250, July 1997.
[8] H. Yue, X. Liu, and S. Zhao, “Evaluate two software configuration management tools: MS Perforce and Subversion,” Computational Intelligence and Software Engineering, pp. 1-6, December 2010.
[9] D. Kim and C. Youn, “Traceability enhancement technique through the integration of software configuration management and individual working environment,” Secure Software Integration and Reliability Improvement, pp. 163-172, June 2010.
[10] X. Wang, W. Chen, Y. Wang, and H. You, “The design and implementation of hardware task configuration management unit on dynamically reconfigurable SoC,” Embedded Software and Systems, ICESS ’09, pp. 179-184, May 2009.
[11] M. Mehendale, “SoC – the road ahead,” IEEE VLSI Design, January 2006.
[12] J. Burns, “Technology trends and implications on SoC design,” IEEE SoC Conference, p. 386, September 2011.
[13] D. Bell, “Performance tuning Subversion,” IBM developerWorks, May 2007, accessed August 2012, http://www.ibm.com/developerworks/java/library/j-svnbins/index.html.
[14] R. Prahov, H. Schmidt, E. Müller, and A. Graupner, “Subversion(r): an Empirical Performance Case Study from a Collaborative Perspective on Integrated Circuits and Software Development,” ICSESS, in press.
[15] R. Prahov, E. Müller, and A. Graupner, “Configuration Management from the Perspective of Integrated Circuit Design,” IEEE 27th Convention of Electrical and Electronic Engineers in Israel, pp. 1-5, November 2012.
[16] P. Cederqvist, et al., “Version Management with CVS” (for CVS 1.12.13), 2005, accessed August 2012, http://ximbiot.com/cvs/manual/.
[17] “The open source development report,” Eclipse Survey Report 2011, June 2011, accessed August 2012, http://www.eclipse.org/org/community_survey/Eclipse_Survey_2011_Report.pdf.
[18] B. Collins-Sussman, B. Fitzpatrick, and C. Pilato, “Version Control with Subversion” (for Subversion 1.7), 2011, http://svnbook.red-bean.com/.
[19] B. Berliner, “CVS II: Parallelizing software development,” USENIX Winter 1990 Technical Conference, pp. 341–352, 1990.
[20] The OpenOffice™ project, http://www.openoffice.org/.
[21] “The BuildIC™ SoC Development Platform,” Methodics, accessed August 2012, http://www.methodics-da.com/products/projectic.
What’s Behind Digital Power
by Herman Neufeld (ZMDI)
May 1st, 2013, Electronic Products (USA)
Energy-Saving Initiative – an Electronic Products special series

What’s Behind Digital Power Control?
By Herman Neufeld, Senior Member of the Technical Staff at ZMDI, [email protected]

Like any new technique that is introduced in the market, digital power control must first prove that it offers important advantages over state-of-the-art analog techniques. In this vein, the first and foremost issue to be addressed is the price, and the secondary considerations are converter size, performance and efficiency. This article covers these issues and also discusses digital power control from a broad standpoint.

What is digital power?

Digital power, as the term implies, is a technique used for converting power via digital control means. Instead of using analog components, such as operational amplifiers and comparators, it uses a digital controller. Both control techniques are designed to ensure that the power stage switches at the right moment in every switching cycle in order to properly regulate the output voltage. Deviations from the correct switching instant lead to deteriorating performance, instabilities, and in extreme cases to malfunction of the load that is being powered. Therefore, performance—not just price—should be something to closely consider. In fact, one of the major differentiators between digital and analog power control is performance.

Cost

For the typical power supply designer, analog technology has been proven to deliver good and efficient power converters. So why change? Why spend more money on a digital controller? What is clearly overlooked here is that not all converter applications require a digital controller. Take, for example, a converter that is required to produce 5 V at 1 A. In this case, analog control is the best choice—a conclusion based purely on price. There are also many analog controllers available on the market. As a rule of thumb, one could state that analog control is the preferred choice for converters with output voltages above 3 V and currents below 10 A. Digital controllers are not meant to compete against these analog controllers, especially when price is important and analog performance is perfectly adequate for the application. However, for a fast-growing market of servers, routers, switches and embedded controls, the converters that power these applications require a much higher level of performance than analog controllers can offer. Loads such as field-programmable gate arrays (FPGAs), processors, memory banks and similar digital blocks need to interact with the converter feeding them. Analog controllers with a digital interface are also available on the market, but they are not as flexible as digital controllers when requirements change. ASICs also require development time and cost.
With a digital controller, such as ZMDI’s ZSPM1000, the user is able to configure the settings on the controller and issue PMBus commands to change them. The equivalent of an ASIC can also be realized by modifying the firmware in order to meet the customer’s needs. This, however, is done by the IC manufacturer. The cost savings compared to an analog ASIC are achieved because the IC itself does not change. Further reductions in cost can be achieved via a fully automated production process that is possible with a digital controller. The converter can be programmed, tested, and calibrated without the need for human intervention.

Fig. 1: Configuration setup for the ZSPM1000 digital controller.

Design time also needs to be factored into the cost of the converter. When using a digital controller, the converter design does not need to be done by a power supply specialist. It can be done by the very same digital hardware design engineer developing the board. Last-minute changes can be quickly implemented because the requirements are programmed into the digital controller, something that can also be done “on the fly.”

Performance

The kinds of applications addressed by digital controllers are typically more involved than those for analog controllers. Consider, for instance, a fast-occurring 20-A load step at an output voltage of 1 V. At this low voltage, a 200-mV deviation on the output (20%) may be unacceptable for many applications. Resorting to adding more output capacitance on the board in order to minimize output voltage deviations unnecessarily burdens the bill-of-materials cost while it also slows down the converter’s response. With a digital controller, implementing advanced transient response algorithms, such as ZMDI’s State-Law Control, reduces expenses while improving transient response.

Fig. 2: Comparison of the transient response of a digital controller (pink trace) vs. an analog controller (white trace).

It is also important to know that the LC filter on the output of a DC/DC converter does not exhibit real poles that can be compensated for by the error amplifier’s compensation network. The poles are actually complex, and their position depends on the Q factor of the filter. An unconditionally stable converter can be designed with an analog controller, but at the expense of performance that can easily be obtained from a digital controller. Just imagine having a converter with feedback and feed-forward networks that adapt continuously to your converter’s operating conditions. This is what is achieved with digital power control.
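As a textbook illustration of that point (not taken from the article): for an ideal buck output filter with inductance L and capacitance C loaded by a resistance R, the control-to-output transfer function is second order, and its poles form a complex pair as soon as the quality factor exceeds one half.

\[
  G(s) = \frac{1}{1 + \dfrac{sL}{R} + s^{2}LC},
  \qquad
  \omega_{0} = \frac{1}{\sqrt{LC}},
  \qquad
  Q = R\sqrt{\frac{C}{L}},
\]
\[
  s_{1,2} = \frac{\omega_{0}}{2Q}\left(-1 \pm \sqrt{1 - 4Q^{2}}\right)
  \quad\Longrightarrow\quad
  \text{complex poles for } Q > \tfrac{1}{2}.
\]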
Converter size
The size of a digital converter will typically differ from that of an analog converter depending on the total number of external components needed to address the features required by the load. As far as controller size is concerned, process geometries have become smaller in recent years, and digital circuits benefit from this because they can be scaled down in size much more readily than their analog counterparts. As evidence, ZMDI's high-performance digital PWM controller, the ZSPM1000, comes in a 4 x 4-mm QFN package. Small size also means less silicon area and hence lower cost.

Efficiency
Analog controllers can provide high efficiency over a wide range of output currents by switching between two modes of operation. One is a constant or pseudo-constant-frequency mode for continuous conduction of the inductor current; the other is a pulse-skipping mode for light loads, in which the inductor current reaches zero within every switching cycle and the cycle period is set by the droop time of the output capacitors. Digital control does this too, but for output voltages of approximately 1 V and currents in the tens of amperes, additional considerations must be addressed in order to minimize conduction losses and to save energy through the various standby and sleep-mode techniques. Here again a digital controller is the ideal choice because it can be programmed into various operating modes. To reduce conduction losses, a driver-MOS (DrMOS) power stage is employed alongside the digital controller, for example ZMDI's ZSPM9060, which can deliver an average current of up to 60 A and has been optimized for very high efficiency.

Another aspect that tends to be overlooked by power-supply designers is the relationship between the maximum power delivered by a digitally controlled converter and the digital controller's active supply current. The digital controller's operating current is generally higher than that of its analog counterpart, but at the power levels it controls this current becomes an insignificant fraction of the total power budget.

PoL modules
Another application that fits digital control very well is point-of-load (PoL) modules. Producing DC/DC converter modules requires a high degree of automated production. Variations in module outputs can be configured easily, either via PMBus or via pin-strapping, and the module manufacturer can tailor the module's characteristics to optimize it further for the load.

Future trends in digital power
As the number of digital boards continues to increase and the trend toward more energy-efficient designs continues to dominate, digital power will retain high growth potential in the coming years. Cost savings can be obtained through fast design turnaround, savings in staff, savings in production costs, and faster time to market.

Interview with Thilo von Selchow, President and CEO of ZMDI
May 14, 2013 in EEWeb Pulse (USA)

Siege und Niederlagen ("Victories and Defeats")
July 25th, 2013 in Handelsblatt (Germany)

(Handelsblatt cover story, Tuesday, 23 July 2013, No. 139)
► Economists call for more honesty. ► The new federal states have not caught up.

Die Soli-Lüge (The Soli Lie)

"The Soli is nothing but a fleecing of taxpayers by a grand coalition of the CDU/CSU, the Greens, the SPD and the Left Party." Reiner Holznagel, president of the Bund der Steuerzahler (German Taxpayers' Federation)

The solidarity surcharge is surrounded by myths. One myth holds that the supplementary levy is paid only by West Germans, although East Germans, too, must show financial solidarity with themselves. That leads to a further myth: the Soli is presented as lived solidarity of the West with the East. In fact, the federal government stressed as early as the beginning of 1997 that the term "solidarity" refers above all to the design of the levy, which "burdens all taxpayers without exception, according to their economic capacity."

To take the ground out from under such myths, Oliver Holtemöller, head of macroeconomics at the Halle Institute for Economic Research, recommends: "The solidarity surcharge should be folded into the income-tax schedule so that the misunderstandings stop." As a next step, one could then consider whether the extra tax burden of 5.5 percent is appropriate at all. There would be room for tax cuts, however, only if spending were reduced. Clemens Fuest, head of the Centre for European Economic Research (ZEW), likewise said that anyone wanting to abolish the Soli would have to supply "counter-financing" for the federal budget.
Fuest schlägt vor: „Der Solidaritätszuschlag sollte umbenannt werden in Bundeseinkommenssteuerzuschlag.“ Denn das Geld fließe nicht speziell in die neuen Länder – der Begriff sei „irreführend“. Jenseits semantischer Probleme verteidigte CDU-Generalsekretär Hermann Gröhe am Montag trotz Kritik aus den eigenen Reihen den Vorstoß der Bundeskanzlerin. Er sehe „keinen Entlastungsspielraum in der kommenden Legislaturperiode“, sagte Gröhe nach einer Sitzung des CDU-Bundesvorstands. Die Frage einer Abschaffung des Zuschlags stelle sich deshalb nicht. Das Ziel, den Haushalt in Ordnung zu bringen, bedeute auch, dass umfassende Steuersenkungen – eine Abschaffung des Solis bedeutete eine jährliche Entlastung der Steuerzahler von 13 bis 14 Milliarden Euro – nicht auf der Tagesordnung stehen könnten. Merkel will den Solidaritätszuschlag beibehalten, aber die spezifi- Helmut Kohl: Der damalige Kanzler kündigte 1996 das Ende des Solis an. sche Förderung Ostdeutschlands nach 2019 beenden. Dann läuft der Solidarpakt II aus. „Manche Regionen in den neuen Ländern stehen wirtschaftlich besser da als Teile der alten Bundesrepublik“, hatte die CDU-Vorsitzende am Wochenende gesagt. Zur Begründung führte Merkel die Forderung von Thüringens Ministerpräsidentin Christine Lieberknecht (CDU) an, die zu Recht darauf hingewiesen habe, dass nach dem Ende des Solidarpakts II die spezifische Förderung für den Osten in eine Förderung nach regionaler Notwendigkeit umgewandelt werden könnte. Das heißt, dass Merkel diesen Teil des Finanzausgleichs zwar umwidmen, aber grundsätzlich beibehalten will. Die SPD hat dagegen vor allem Spott für die Regierungskoalition übrig. „Einmal mehr wird mit viel Theaterdonner ein steuerpolitisches Fass aufgemacht“, kommentierte Fraktionsvize Joachim Poß die schwarz-gelbe Debatte über die Abschaffung des Solidaritätszuschlags. Die Rollen bei diesem „Uralt-Stück“ seien wohlbekannt: Die FDP versuche mit einer Soli-Diskussion im Sommerloch Anlauf für den Sprung über die Fünfprozenthürde bei der Bundestagswahl am 22. September zu nehmen. Was Poß nicht erwähnte: SPD-Ministerpräsidenten scheuen sich nicht, die Erhebung des Zuschlags über das Jahr 2019 hinaus zu fordern. So hatte NRW-Ministerpräsidentin Hannelore Kraft schon in der vergangenen Woche deutlich gemacht, dass sie es für gerechtfertigt hält, nach 2019 einen neuen Sonderfonds zu beginnen. Die Strukturförderung sei jedoch auf das Prinzip „Bedürftigkeit statt Himmelsrichtung“ umzustellen. „Dies wird auch im Rahmen der Verhandlungen für einen neuen Länderfinanzausgleich eine Rolle spielen“, sagte Kraft. Hamburgs Erster Bürgermeister Olaf Scholz wirbt ebenfalls seit längerem dafür, den Solidaritätszuschlag als „Ergänzungsabgabe“ auch nach 2019 durch den Bund zu erheben. Ganz anders hingegen der Präsident des Bundes der Steuerzahler, Reiner Holznagel. Angesichts von Rekordsteuereinnahmen sei ein Festhalten am Soli „reine Abzocke der Steuerzahler durch eine Große Koalition aus Union, SPD, Grünen und der Linken“. Heike Anger, Michael Brackmann, Dorit Heß, Jens Münchrath, Thomas Sigmund © Handelsblatt GmbH. Alle Rechte vorbehalten. Zum Erwerb weitergehender Rechte wenden Sie sich bitte an [email protected]. 
2013 2012 2011 200 2010 2009 2007 2006 150 2006 2005 2004 2003 100 2002 2001 2000 1999 1998 50 1997 1996 1995 1992 1991 0 Erzielte und erwartete Einnahmen durch den Solidaritätszuschlag Amtliche Daten des Bundesfinanzministeriums AUFBAU OST Siege und Ni Auch im Osten gibt es Erfolgsgeschichten – doch die Abwanderung der Bürger in den Westen geht weiter. Silke Kersting, Norbert Häring Berlin, Frankfurt D ddp images Forstetzung von Seite 1 2014 Handelsblatt | 1) gesetzlich festgeschrieben; 2) zwischen Bund und Ländern vereinbart | Quellen: Destatis, Bund der Steuerzahler, HB, Die Welt, BA, Arbeitskreis Volkswirtschaft Gesamtrechnung der Länder 300,8 Mrd. € ► Mit Ausnahme der FDP halten alle Parteien am Soli fest. resden boomt. Der Mikrotechnologie-Cluster in der sächsischen Landeshauptstadt genießt Weltruf. Viele High-Tech-Firmen haben sich angesiedelt. Ebenso Jena: Die thüringische Stadt hat nach der Wende auf optische Technologien gesetzt und gilt heute mit Jenoptik und Carl Zeiss Meditec als Vorzeigestandort. In beiden Städten hat die Bundesregierung nach dem Mauerfall die in der DDR entstandene Grundstruktur in der Mikround Optoelektronik gezielt gefördert. Am Dresdener Stadtrand entstanden so hochsubventionierte Chipfabriken, die noch STREIT ÜBER SOLIDARITÄTSZUSCHLAG 5 DIENSTAG, 23. JULI 2013, NR. 139 2 HERMANN OTTO SOLMS Solidaritätszuschlag bis 2017 Wie der Bund am Soli verdient 2012 Aufkommen aus dem Solidaritätszuschlag 13,6 Mrd. € An die neuen Bundesländer1 10,8 Mrd. € davon Förderprogramme des Bundes2 3,5 Mrd. € 20 im Zeitraum 2005 bis 2019 „Der Vorstoß der Kanzlerin ist ein Vertrauensbruch“ 2019 17,5 Mrd. € 3,1 Mrd. € 1,0 Mrd. € 207,8 Mrd. € A ls Vorsitzender der FDP-Bundestagfraktion hat Hermann Otto Solms den Soli 1995 wieder miteingeführt – heute streitet der Vizepräsident des Deutschen Bundestags für seine Abschaffung. Das Argument des 72-Jährigen: Die Abgabe war zur Finanzierung der Einheit zeitlich befristet angelegt. Schätzung Aufkommen 15 105,4 Mrd. € Zuweisungen an die neuen Bundesländer Herr Solms, die Kanzlerin will den Solidaritätszuschlag über 2019 hinaus beibehalten. Die Bürger haben erwartet, dass die Steuer nicht endlos weiter erhoben wird. Muss man da nicht von einer Soli-Lüge sprechen? Solms: Der Vorstoß führt zu einem Vertrauensbruch gegenüber den Wählern. Die Bürger haben fest damit gerechnet, dass der Soli in einem überschaubaren Zeitraum entfällt. Das wäre jetzt in weite Zukunft gerückt, sollte sich die Union hier durchsetzen. 10 50,7 Mrd. € Förderprogramme in den neuen Bundesländern 5 51,7 Mrd. € 0 2005 ’06 ’07 ’08 ’09 2010 ’11 2012 ’13 ’14 2015 ’16 ’17 ’18 2019 Werner Schuering/imagetrust Differenz zugunsten des Bundes Ost-West-Vergleich Neue Bundesländer Alte Bundesländer 35 000 Durchschnitt Deutschland 25 % 40 000 20 % 35 000 15 % 30 000 10 % 25 000 Hermann Otto Solms: „Die Grundlage für den Soli gibt es nicht mehr.“ 30 000 Was stört Sie am Soli konkret? Als Schwarz-Gelb unter Helmut Kohl den Zuschlag 1995 wieder einführte, war ich Fraktionschef der FDP im Bundestag. Wir waren uns damals einig: Der Soli sollte zur Finanzierung der Einheit dienen. Nachdem dieser Zweck 2019 ausläuft, ist die Grundlage für den Soli entfallen. Jetzt müssen die Bürger hören, dass das alles Makulatur sei. Die Union will das Geld für andere Zwecke einsetzen. 
25 000 20 000 15 000 10 000 Durchschnittliches Bruttoinlandsprodukt pro Kopf und Jahr in Euro 5 000 5% 0 20 000 Arbeitslosenquote in Prozent 0% 1991 1995 2000 2005 2012 1991 1995 2000 2005 2012 15 000 1991 Durchschnittliches Arbeitnehmerentgelt pro Jahr in Euro 1995 2000 2005 2012 ederlagen heute wichtige Standbeine der sächsischen Wirtschaft sind. Auch Unternehmen mit Wurzeln in der DDR haben sich behauptet. Zum Beispiel das Zentrum für Mikroelektronik Dresden (ZMDI). Es wurde vor mehr als 50 Jahren gegründet und galt lange als Herzstück der Mikroelektronikforschung der DDR. ZMDI ist heute weltweit aktiv und auf den Bau von Mikrochips konzentriert, die Autos oder Beleuchtungsanlagen energieeffizienter machen. Es gibt sie, die Positivbeispiele in den neuen Ländern. Einerseits Unternehmen aus der früheren DDR, Rotkäppchen etwa, eine ostdeutsche Sektmarke, die heute auch gern im Westen gekauft wird. Andererseits umsatzstarke Unternehmen wie der Berliner Energieanbieter Vattenfall, eine Tochter des schwedischen VattenfallKonzerns. Doch genau da liegt das Problem: In den neuen Bundesländern sind in der Mehrzahl Tochtergesellschaften internationaler oder westdeutscher Konzerne vertreten. Große Firmenzentralen gibt es so gut wie nicht im Osten Deutschlands. Ausnahmen sind die Deutsche Bahn oder die Dienstleistungsgruppe Dussmann, die ihren Sitz in Berlin haben. Die Erfolgsgeschichten kommen häufig von Unternehmen mittlerer Größe, etwa Biotronik oder Eckert & Ziegler. Davon profitiert auch der Arbeitsmarkt. Die Arbeitslosigkeit in den neuen Ländern ist derzeit so niedrig wie seit 1991 nicht mehr. Mit knapp 9,9 Prozent beträgt sie allerdings immer noch das 1,7-Fache des Westniveaus. So groß war der Abstand auch von 1994 bis 1997. Bei stagnierender Konjunktur war er allerdings auch schon merklich größer. Hinzu kommt, dass der Wegzug von Arbeitnehmern die Arbeitslosenquote in den neuen Bundesländern gedrückt hat, was zeigt, dass sich die Lebensbedingungen nicht angeglichen ha- 18,7 % der sozialversicherungspflichtigen Stellen in Deutschland liegen in den neuen Ländern. ben. In den vergangenen zehn Jahren haben die neuen Länder sieben Prozent ihrer Bevölkerung verloren, im Westen betrug der Rückgang nur 1,5 Prozent. Auch beim Blick auf die Beschäftigungsentwicklung gibt es wenig zu feiern. Mitte 1992 stellten die neuen Länder noch knapp 23 Prozent der gesamtdeutschen sozialversicherungspflichtigen Arbeitsplätze. Ende 2012 lag der Anteil mit 18,7 Prozent allerdings so tief wie noch nie seit der Wiedervereinigung. Einzig beim Lohnniveau sind der Osten und der Westen einander näher gekommen. Von 57 Prozent des Westniveaus 1991 stieg das durchschnittliche Lohnniveau im Osten auf 82 Prozent 2012. Seit dem Jahr 2009 hat sich diese Entwicklung jedoch nicht weiter fortgesetzt. Insgesamt spiegelt das auch die Angleichung der Wirtschaftskraft wider – jedenfalls, wenn man sie auf die im Osten deutlich schneller sinkende Bevölkerung bezieht. Von 43 Prozent des Westniveaus stieg die relative Wirtschaftsleistung pro Einwohner bis 2009 auf 72 Prozent. 2012 lag sie mit 71 Prozent des Westniveaus aber wieder etwas niedriger. Die Kanzlerin will das Geld in Infrastrukturprojekte stecken. Was haben Sie dagegen ? Ich bestreite doch nicht den Finanzierungsbedarf von maroden Brücken oder Straßen. Doch dieser Vorstoß passt zur gegenwärtigen Steuerdiskussion. SPD und Grüne wollen den Menschen über höhere und neue Steuern an den Geldbeutel. 
The CDU/CSU has laid out a cornucopia of election gifts for which it wants to repurpose the Soli. But I have been in the Bundestag for a few years now, and I know what tends to happen with funds like these.

Such as?
Small and medium-sized companies invest well over 50 percent of their earnings. The share of investment in government spending is only nine percent. If the economy's income is cut by taxes, investment falls as well. That squanders the future. The CDU/CSU's election promises alone run into the tens of billions. When we are now told that the Soli funds are to be used sensibly across all of Germany after 2019, I have my doubts.

You do not believe the money will be spent on investment in infrastructure?
Yesterday the latest figures on tax revenue for the first half of 2013 came out. The state is swimming in money, yet it never manages to make do with it. The coalition has now brought itself to present a structurally balanced budget for 2014. We simply do not want to achieve that, as the CDU/CSU does, through higher taxes or the continuation of financial burdens. The FDP wants to consolidate the budget without raising taxes.

The interview was conducted by Thomas Sigmund.

Sensor signal-conditioning ICs ease the design of sensor systems by David Grice (ZMDI)
October 1st, 2013 in Electronic Industry (USA)

Sensor signal-conditioning ICs ease the design of sensor systems
Cost effective and power efficient, sensor-signal-conditioning ICs deliver high precision and accuracy if implemented properly
BY DAVID GRICE, Field Application Engineer, ZMDI, www.zmdi.com

The market for sensors and sensor-related components is a high-growth industry expected to expand in automotive, industrial, medical, and consumer applications. Products such as media players, tablet PCs, and smartphones are driving significant growth in the sensor market, requiring a related increase in the number of designers and manufacturers integrating sensors into modules for resale or for their own products. The wide range of sensing-element types and the demands for faster time to market and lower costs present numerous challenges, even for veterans of sensor design.

The perennial challenge for sensor-interface designers is correcting and calibrating the inherent non-idealities present in transducers, typically offset and a nonlinear response to stimulus, with a temperature dependence for one or both of these factors. There is a host of custom design approaches and solutions to this problem, but the availability of commodity integrated circuits offers designers new choices that are powerful and cost-effective. By combining precise, programmable analog circuitry with high-density digital controllers dedicated to processing correction algorithms, these sensor-signal-conditioner (SSC) ICs reduce the design time and cost of sensor systems while providing the designer with a menu of built-in capabilities and support tools for implementing sensor correction. Understanding the sensor's characteristics and how to configure its corresponding SSC are the key ingredients for obtaining optimum performance and keeping costs low.
Overview of sensor correction
Transducers exhibit various types and degrees of offset and nonlinear response. The basic idea of calibration and correction is to maximize the usable range and to transform the nonlinear response into a predictable linear output that minimizes the error in the sensor output. The nature of the non-idealities varies widely between sensor types, and the difficulty and complexity of applying corrections increase in proportion to the magnitude and degree of these undesirable effects.

Fig. 1: Typical sensor responses to input stimulus.
Fig. 2: Sensor output variation over temperature.

Figure 1 illustrates several types of sensor responses. Each has different basic characteristics and related correction issues. S1 has low offset and relatively low nonlinearity. S2 has a narrow span but a very high offset, which must be removed before sufficient gain can be applied to create a useful signal level. S3 has a sharp "knee," and piecewise-linear correction is generally a good option for this type of nonlinearity. S4 has an inflection point and would require at least a third-order polynomial correction to achieve high accuracy over the entire measurement range.

Another important factor to consider is how these sensors behave over temperature. Figure 2 shows a typical scenario for the temperature variation of a sensor element. In this case, the offset increases while the span decreases with increasing temperature. The challenge is to understand the exact nature of the dependence and remove its contribution to system error. Plotting the offset and gain versus temperature will reveal another set of curves with linear, quadratic, or higher-order dependence on temperature. Each individual sensor element will have its own characteristic span and offset with respective temperature dependencies. The correction algorithm applied must also account for the type and degree of these differences across variations such as process tolerances, shifts between manufacturing lots, or package-stress effects introduced at the next assembly level.

Hardware implementation
The block diagram shown in Fig. 3 (block diagram of a sensor-signal-conditioner IC) presents a practical and cost-effective approach to sensor calibration and correction. It is a 16-bit-resolution resistive-bridge sensor signal conditioner with built-in correction algorithms capable of compensating for a variety of undesirable sensor characteristics. A proprietary microcontroller with 18-bit digital signal processing (DSP) performs the necessary calculations for the correction algorithms using calibration coefficients stored in nonvolatile memory. In addition, the device performs auxiliary operations, including temperature sensing and bridge biasing, and it has multiple communication interfaces. It represents a complete solution for interfacing to and correcting the output of a sensor bridge, providing a precise, accurate, and compensated sensor output.
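The article does not, and need not, spell out the arithmetic the on-chip DSP performs, but the general shape of such a correction is simple. The Python sketch below is a purely illustrative first-order gain-and-offset correction whose coefficients vary linearly with the temperature reading; the coefficient names and values are invented for the example and are not taken from any ZMDI device documentation.

```python
# Illustrative sensor-correction math, not a vendor algorithm: conditioned
# output = temperature-dependent gain * (raw reading - temperature-dependent
# offset), with gain and offset each modeled as a first-order polynomial of
# the on-chip temperature reading. All coefficient values are made up.

def condition(raw: float, temp: float, coeff: dict) -> float:
    """Return a corrected, normalized (0.0 .. 1.0) sensor output."""
    offset = coeff["off0"] + coeff["off_tc"] * temp           # offset drifts with T
    gain = coeff["gain0"] * (1.0 + coeff["gain_tc"] * temp)   # span drifts with T
    out = gain * (raw - offset)
    return min(max(out, 0.0), 1.0)                            # clamp to output range

# Example coefficients as they might come out of a two-temperature calibration.
coeff = {"off0": 0.012, "off_tc": 4.0e-5, "gain0": 2.45, "gain_tc": -2.0e-4}

print(condition(raw=0.215, temp=25.0, coeff=coeff))
print(condition(raw=0.215, temp=85.0, coeff=coeff))
```

Higher-order variants, for example S4's third-order nonlinearity or S3's piecewise-linear knee, extend the same idea with more terms and more calibration points, which is the trade-off that Table 1, discussed below, organizes.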
Getting to know your sensor
One of the most important and effective tasks the designer can carry out is a thorough characterization of the sensor element. Time and effort invested in this step will pay off in the long run by reducing overall design time and development costs, improving overall system performance and robustness, and ultimately reducing production test time and cost. It is tempting to rush through this part of the product-development cycle, but experienced sensor-system designers will testify to the importance of spending the necessary time and resources to characterize and analyze sensor data before proceeding to the next step of developing an optimized sensor-correction algorithm.

For example, consider the response curve of sensor S3 in Fig. 1. If the input range is limited to between 10% and 30% or between 60% and 90%, a first-order gain and offset correction algorithm might suffice, depending on temperature variations. However, if the sensor must operate across the entire input range, a more sophisticated correction algorithm is needed. Even if the intended range of operation appears to be confined to one of the linear regions, consider what would happen if a future lot of sensors were to shift so that the knee of the curve moved into what was previously a linear region. Not having more sophisticated correction techniques available could then require a significant redesign.

It is vitally important for the sensor-system designer to understand the characteristics of the sensor across the input measurement range and over the operating temperature range. Some of the more important considerations include:
• The shape and order of the sensor response over the desired measurement range, including at least a 10% margin outside the expected minimum and maximum values.
• The type and order of the temperature dependence for offset and span.
• The consistency of the measured parameters. Consider what the effect on the correction algorithm would be if future manufacturing lots showed a shift in a significant feature such as an inflection point or the sign of a temperature coefficient.
• Whether the characterization data set is adequate and statistically significant. This includes the number of devices tested and the number of points measured for each.
• How much error the data-acquisition system contributes to the characterization.

Selecting and implementing the best correction technique
With the sensor characterization data in hand, the degree and type of correction required for gain and offset can be matched with the best algorithm available in the SSC. Table 1 (a list of correction algorithms for an SSC, showing the number of calibration points and the correction factors applied) lists some typical algorithms available in commercial ICs. The algorithms are organized by the type and degree of correction, and the second column indicates how many measurement points are needed to calculate the calibration coefficients for each algorithm. The next columns list the element of correction each calibration method applies and describe the sensor characteristics that must be isolated and quantified to determine the optimal algorithm (TC refers to the temperature coefficient). Eliminate algorithms that correct for effects that are negligible in the particular system and choose the one with the least number of measurement points; the ultimate goal is to produce measurement results that meet the sensor product's accuracy requirements with the minimum number of points necessary for calibration during production.

Once the sensors have been characterized and the data set has been evaluated, the next step is to narrow the field of correction options. SSC manufacturers usually provide hardware and software for their devices that allow the calibration methods to be selected and evaluated quickly and easily (Fig. 4: screen capture of a software aid for selecting and evaluating calibration methods).
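This screening step, fitting the candidate algorithms to characterization data and keeping the cheapest one that meets the error budget, is easy to prototype offline before configuring an SSC. The NumPy sketch below does exactly that on synthetic data standing in for real bridge measurements; it mirrors the idea behind the selection software shown in Fig. 4 but is an independent illustration, not that tool.

```python
# Offline algorithm screening on (synthetic) characterization data: compare the
# worst-case residual of a first-order fit against a third-order fit and keep
# the cheapest algorithm that meets the accuracy target. Data values are
# invented for illustration only.
import numpy as np

stimulus = np.linspace(0.0, 1.0, 11)                 # normalized applied stimulus
raw = 0.05 + 0.80 * stimulus + 0.06 * stimulus**3    # a mildly nonlinear bridge
raw += np.random.default_rng(0).normal(0, 1e-4, raw.size)  # measurement noise

target_error = 0.002                                 # allowed full-scale error

for order, points_needed in [(1, 2), (3, 4)]:
    coeffs = np.polyfit(raw, stimulus, order)        # map raw reading -> stimulus
    corrected = np.polyval(coeffs, raw)
    worst = np.max(np.abs(corrected - stimulus))
    verdict = "meets" if worst <= target_error else "fails"
    print(f"order {order}: {points_needed} calibration points, "
          f"worst-case error {worst:.4f} ({verdict} target)")
```

In production, the chosen correction is then frozen into the SSC's nonvolatile coefficients using however many calibration points the selected row of Table 1 requires.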
Neueste Forschung macht's möglich: Schnelle Fehlerbehebung im Fahrzeug durch DIANA-Forschungsprojekt ("Latest research makes it possible: fast fault repair in vehicles through the DIANA research project")
by ZMDI, Infineon, Continental, Audi
August 5th, 2013 on www.infineon.com

Press release of the partners in the German research project "DIANA": AUDI, Continental, Infineon Technologies, ZMDI. Business press, 5 August 2013.

Neubiberg, 5 August 2013. From 2015, workshop stays for vehicles could become considerably shorter. This is made possible by the joint research work of AUDI, Continental, Infineon Technologies and ZMDI. In the DIANA project they investigated how the analysis and diagnostic capabilities of a vehicle's electronic control units can be improved. Over three years of work under Infineon's leadership, methods were developed that enable differentiated fault detection and thus faster fault repair in the workshop. With the support of research institutes and universities, the way was paved for "end-to-end diagnostic capability in semiconductor devices and higher-level systems for the analysis of permanent and sporadic faults in the overall automotive system" (DIANA).

Vehicle electronics today are extremely complex. On average a car contains 80 electronic control units; in a premium vehicle there can be a hundred or more. Experience shows that in vehicle electronics the actual cause of many reported faults cannot be identified unambiguously. Often the workshop's only option has been to narrow a fault down systematically, based on the fault description, by swapping out system components until it is fixed. Building on the methods developed in DIANA, electronic malfunctions in automobiles will in future be detected and corrected faster and far more efficiently.

The decisive foundation is quality-control techniques taken from semiconductor production. The DIANA research partners extended these techniques so that the chips installed in the vehicle can be used directly for the vehicle's self-diagnosis. As a result, the vehicle's electronic control units can check themselves continuously before and during a drive. On the basis of the data gained through this continuous self-diagnosis, malfunctions can be detected early, because the diagnostic data are pre-processed and handed over to higher-level system components of the control unit. The mechatronics technicians in the workshop then benefit from this when diagnosing faults. Such end-to-end diagnostic capability in the vehicle has become achievable only through the concerted research and development work of the DIANA partners. If the diagnostic techniques prove themselves in vehicles, further safety-relevant fields of application open up, for example in other transport systems such as rail or aircraft, or in medical technology.
The DIANA project was funded by the German Federal Ministry of Education and Research (BMBF) with a contribution of about 4.8 million euros, within the framework of the federal government's High-Tech Strategy and the "Information and Communication Technology 2020" (IKT 2020) programme. Focal points of the IKT 2020 programme include automobiles and mobility; the goal is to improve the robustness of vehicle electronics substantially.

Other project participants
The four project partners were supported by the Fraunhofer Institute for Integrated Circuits in Dresden, the Universität der Bundeswehr München, and the universities of Cottbus, Erlangen-Nürnberg and Stuttgart.

About AUDI
As a manufacturer of premium-segment automobiles, AUDI AG delivered 1,455,123 vehicles to customers worldwide in 2012. The company develops and produces in Germany at its Ingolstadt and Neckarsulm sites as well as at eight further locations abroad. With currently more than 70,000 employees, the AUDI Group, which also includes the Lamborghini and Ducati brands, generated revenue of 48.8 billion euros last year.

About Continental
With sales of 32.7 billion euros in 2012, Continental is among the leading automotive suppliers worldwide. As a provider of brake systems, systems and components for powertrains and chassis, instrumentation, infotainment solutions, vehicle electronics, tires, and technical elastomer products, Continental contributes to greater driving safety and to global climate protection. Continental is also a competent partner in networked automotive communication. Continental currently employs around 173,000 people in 46 countries. Further information at www.continental-corporation.com.

About ZMDI
Zentrum Mikroelektronik Dresden AG (ZMDI) is a global supplier of analog and mixed-signal semiconductor solutions for automotive, industrial, medical, mobile sensing, IT and consumer applications. These solutions enable our customers to develop power-management, lighting and sensor products that deliver maximum energy efficiency. ZMDI has been headquartered in Dresden for more than 50 years. With more than 350 employees worldwide, ZMDI serves its customers through sales offices and design centers in Germany, Italy, Bulgaria, France, Ireland, Japan, Korea, Taiwan and the United States. Further information at www.zmdi.com.

Press contacts:
AUDI AG, Armin Götz, Communications Product / Technology, phone +49 (841) 89-90703, e-mail [email protected]
Continental AG, Simone Geldhäuser, External Communications, Powertrain Division, phone +49 (941) 790-61302, e-mail [email protected]
ZMDI, Freda von Kopp, Marcom Creative Manager, Corporate Marketing and Communications, phone +49 (351) 8822-204, e-mail [email protected]

About Infineon
Infineon Technologies AG offers semiconductor and system solutions addressing three central challenges of modern society: energy efficiency, mobility and security. With around 26,700 employees worldwide, Infineon generated revenue of 3.9 billion euros in the 2012 fiscal year (ended September).
Das Unternehmen ist in Frankfurt unter dem Symbol "IFX" und in den USA im Freiverkehrsmarkt OTCQX International Premier unter dem Symbol "IFNNY" notiert. Informationsnummer INFXX201308.059 • Support Bezugsquellen für Infineon-Produkte Bitte verwenden Sie unseren Location Finder, um einen Infineon Distributor oder ein Infineon Sales Office zu finden. • Location Finder Explore our Focus Areas Energy Efficiency Mobility Security © 1999 - 2013 Infineon Technologies AG - Benutzung der Webseite unterliegt unseren Nutzungsbedingungen - Impressum - Kontakt Datenschutzbestimmungen - Glossar © 1999 - 2013 Infineon Technologies AG http://www.infineon.com/cms/de/corporate/press/news/releases/2013/INFXX201308-... 15.10.2013 Unter der Haut by Dr. Marko Mailand (ZMDI) October 2013 in electronik JOURNAL (Germany) Aktive Bauelemente ASIC Unter der Haut NFC- und Sensor-Komponenten auf einem Chip zur In-Vivo-Blutanalyse Spezifische Kommunikations- und Sensortechnologien mit modernsten biochemischen Lösungen kombiniert: Mit diesem Halbleiter adressiert ZMDI die kontinuierliche telemedizinische Überwachung von Blutparametern. So sollen zum Beispiel Diabetes-Patienten mehr über ihren Blutzuckerspiegel erfahren, ohne sich Blut zu entnehmen. Autor: Dr. Marko Mailand M it mehreren Millionen registrierten Erkrankungen ist Diabetes heute eine Volkskrankheit und eine der wesentlichen Ursachen für zahlreiche Kreislauferkrankungen. Medizinisch unterscheidet man zwischen Patienten, bei denen die Bauspeicheldrüse kein Insulin produziert (Diabetes Typ-1) und Betroffenen, bei denen der Körper eine Resistenz gegen Insulin zeigt (Diabetes Typ-2). Insbesondere die Typ1-Diabetes erfordert eine möglichst kontinuierliche Überwachung des Blutzuckerspiegels. Bild 1: Der Fluoreszenz-Sensor von Senseonics misst nur 15 mm x 3 mm; er dient als Basis für ein implantierbares Glukosemesssystem. Bild: fotolia: Kurhan 28 Zur Lösung dieses Problems hat das Unternehmen Senseonics einen Fluoreszenz-Sensor entwickelt, der die Basis für ein implantierbares, kontinuierliches Glukosemesssystem bildet. Das neuartige Sensor-Systemkonzept ist neben der kontinuierlichen Glukosemessung auch auf eine ganze Reihe weiterer Anwendungen adaptierbar. Die Elektronik des ambulant implantierbaren (in-vivo) Sensor-Moduls (Bild 1) ist in einem speziell für Senseonics entwickelten ASIC von ZMDI integriert. Das elektronische Systemkonzept basiert auf der Nutzung ISO-kompatibler Nahfeld-Kommuni- elektronikJOURNAL 05/2013 www.elektronikjournal.com Aktive Bauelemente ASIC Bilder: ZMDI Bild 2: Das Prinzipbild der Funktionsweise des In-Vivo-Glukose-Biosensors zeigt, dass der Sensor per NFC mit Mobilgeräten kommuniziert. Bild 3: Sensor-System-Topologie: Der aktive Reader (links, NFCMaster) versorgt und kontrolliert den passiven NFC-Sensor-Transponder. kation (NFC) und -Energieversorgung (ISO15693, zukünftig auch ISO14443-3) mittels loser, induktiver Kopplung. Befehle (zum Beispiel Messen, Daten speichern, Daten lesen, Diagnose), Daten und Energie werden dabei drahtlos vom NFC-Master zum implantierten Sensor-Modul, dem NFC-Sensor-Transponder, übertragen (Bild 2). Letzterer steuert den Ablauf, führt die jeweiligen Messaufgaben durch und sendet die Daten zurück an den NFC-Master, welcher die einzelnen Messwerte zum Beispiel in einen Glukosewert umrechnet. Der NFC-Master kann beispielsweise als ein spezifisches Armbandgerät ausgeführt oder auch direkt in einem Smartphone integriert sein. 
Durch die Kombination von Wireless-NFC-Technologie mit einem optischen Signalübertragungsweg für die Bestimmung der Blutparameter – speziell der Glukosekonzentration – wird aufbautechnisch eine komplette Verkapselung des implantierbaren NFCSensor-Transponders möglich. Da nun aber auch die Energieversorgung drahtlos geschieht und das Sensor-Modul folglich batterielos agiert, ist die Lebensdauer nur noch durch inhärente Sensoreigenschaften begrenzt – das ist im Wesentlichen das Nachlassen der Fluoreszenzintensität des biochemischen Indikators, der sich auf der Außenseite des Sensors befindet. NFC/RFID-Kompatibilität Wesentliche Anwendungsvorteile für die Patienten resultieren aus der Integration von ISO-standardkompatiblen Kommunikationsund Power-Management-Komponenten. Die aktuelle ASIC-Version implementiert ein ISO15693-Transponderinterface. Das analoge ISO-Frontend nutzt einfache Amplitudendetektion zur Demodulation und ein steuerbares Lastverhalten mittels einer ClampSchaltung zur passiven Rückmodulation. Bei Letzterem wird das Magnetfeld mit der Modulationsfrequenz von ungefähr 423,75 kHz entsprechend gedämpft; diese Dämpfung detektiert dann der www.elektronikjournal.com Reader oder NFC-Master. Der Anwendungsvorteil der ASIC-Realisierung als passiver Transponder liegt auf der Hand: im SensorModul wird keine Batterie benötigt, und es besteht damit keine Einschränkung der Lebenszeit und In-Vivo-Verbleibedauer aufgrund von Energieversorgungseigenschaften. Störungen vermeiden Die besondere Herausforderung besteht nun darin, zu verhindern, dass der Reader jedes Last-Schaltverhalten der digitalen und analogen Baugruppen, insbesondere des Sensor-Teils, als Rückmodulation fehlinterpretiert (Bild 3). Zusätzlich muss gewährleistet sein, dass für eine Sensor-Messung oder einen Messzyklus ausreichend Energie zur Verfügung steht. Die größten Stromverbraucher des ASICs sind der Analog-Digital-Wandler sowie der LED-Treiber beziehungsweise die LED an sich. Diese brauchen etwa 0,35 mA bei einer intern geregelten Spannung von 2,8 V (ADC) oder bis zu 2 mA bei der ungeregelten Betriebsspannung Vsup ~ 4 V (LED) entsprechend der Topologie in Bild 3. Um trotz dieser notwendigen Lastunterschiede keine unerwünschten Frequenzanteile in der Luftschnittstelle zu generieren Auf einen Blick Alles drin Ein besonderer ASIC von ZMDI nutzt NFC zur Kommunikation und Energieversorgung, kombiniert mit dem Treiber für eine UV-LED und den entsprechenden Photo-Sensoren sowie der Signalaufbereitung und -Verarbeitung. Mit diesem Chip hat Senseonics eine implantierbare Lösung zum Messen von Blutparametern entwickelt. infoDIREKT www.all-electronics.de 505ejl0513 elektronikJOURNAL 05/2013 29 Aktive Bauelemente ASIC Tabelle: Optionen und Eigenschaften des Multi-Sensor-Front-Ends Mess-/Sensor-Typ Messbereich Maximale Empfindlichkeit Fotodioden-Strom 1,16 µA 4,5 pA/count Temperatur +15 … +50 °C 18 mK/count Externe Spannung -1,5 … +1,5 V 1,2 mV/count Feldstärke (Iclamp) 140 mW 10 µW/count Spannung: LED-Treiber 1,6 V 1 mV/count Diagnose – Optik 1,16 µA 4,5 pA/count Diagnose – Temperatur +15 … +50 °C 18 mK/count Im ASIC ermitteln eine ganze Reihe an Sensoren wichtige Daten über den Patienten. Quelle: ZMDI. Bild 4: Die Systemsteuerung und der Messablauf ermöglichen bis zu acht Messungen pro Zyklus. sind im ASIC speziell geformte, stetige Ein/Ausschalt-Rampen in der Power-Management-Einheit integriert. 
Dadurch werden die Spektralanteile, verursacht durch das Schalten, in einen Bereich um die 400 kHz verschoben – das relevante Passband liegt aber bei 13,56 MHz ±1 MHz. Das Datensignal wird somit nicht gestört. als ausreichend ist. Das ermöglicht eine Situations-optimale, energieeffiziente Systemauslegung der gesamten Applikation (NFC/ Sensor-Transponder in Zusammenspiel mit NFC-Master). Die Tabelle zeigt die entsprechenden Dynamikbereiche und Empfindlichkeiten der integrierten Sensoren. On-Chip-Sensorik Adaptierbarkeit durch digitale Steuerung Das Hauptmessprinzip zur Ermittlung der Glukosekonzentration nutzt zwei optische Kanäle. Eine vom ASIC gespeiste UV-LED emittiert Licht, welches von der Kapseloberfläche zurückgeworfen wird. Ein Spektralanteil besteht dabei genau aus dem emittieren UV-Licht und beinhaltet keinerlei Nutzinformation. Der Hauptspektralanteil jedoch, resultiert aus der Fluoreszenz der Indikatorchemikalie an der Außenseite des Sensormoduls. Hierbei werden genau nur jene Indikatoren zur Fluoreszenz angeregt, an welche sich Glukosemolekühle gekoppelt haben (Bild 2). Dabei gilt, dass die Intensität der Fluoreszenz direkt von der Konzentration der Glukose abhängt. Beide Spektralanteile (UV-Reflektion und Signal-Fluoreszenz) werden von spektral selektiven OnChip-Fotodioden detektiert und im ASIC analog aufbereitet und digitalisiert. Alle biochemischen Prozesse sind temperaturabhängig. Zur Kompensation dieses Einflusses ist im ASIC ein hochgenauer Temperatursensor integriert. Über diesen kann auf weniger als 0,2 K genau die tatsächliche Temperatur des Sensor-Moduls und des umgebenden Gewebes bestimmt werden. Die digitale Steuerung der einzelnen Sensorkanäle erlaubt bis zu acht unterschiedliche Messungen pro Messzyklus. Ein Messzyklus ist dabei die tatsächliche Reaktion des NFC/Sensor-Transponders auf einen einzelnen Messbefehl des NFC-Masters. Je nach den gewünschten Informationen sowie der dafür notwendigen Messabfolge werden in einem Zyklus Messungen mit und ohne emittierender LED durchgeführt (Bild 4), die Einzelwerte in On-ChipRegistern zwischengespeichert und nach Beendigung aller Messungen die gesammelten Ergebnisse über die NFC-Schnittstelle übermittelt. Die einzige Begrenzung liegt dabei darin, dass der NFC-Master entsprechend der ISO-Standards eine Antwortzeit von maximal 20 ms zulässt. Ein Messzyklus inklusive Setup und Antwort muss somit innerhalb dieser Zeit geschehen, um kein NoResponse-Timeout-Ereignis auszulösen. Die Auswertung und Interpretation der einzelnen Sensor-Messwerte geschieht dann softwarebasiert auf der NFC-Master-Seite. Die freie Konfigurierbarkeit des Messzyklus’ ermöglicht die Anwendung des ASICs und seiner Einzelsensoren in verschiedenen Applikationen. So sind neben der Glukosemessung beispielsweise auch Messungen für Blutsauerstoff, Blutalkohol und vieles mehr denkbar. Hierfür kann das elektronische Sensor-System einfach angepasst werden – es bedarf dafür aber anderer biochemischer Indikatoren. Selbstdiagnose Darüber hinaus sind im ASIC mehrere Eigendiagnostik- und Applikations-Status-Sensoren integriert. Bei der Eigendiagnostik werden dem Temperaturmesspfad oder dem optischen Messpfad (über die Fotodioden) vordefinierte Schaltungsoffsets hinzugefügt, die zu einer bekannten, erwarteten Änderung des Analog/DigitalWandler-Ergebnisses im Verhältnis zur entsprechenden Nicht-Diagnostik-Messung führen müssen. Dadurch lassen sich eventuelle ASIC-interne Alterungs- oder Drift-Effekte auch im implantierten Zustand des Sensors erkennen. 
Der On-Chip-Statussensor zur Messung der aktuell verfügbaren Feldstärke ermöglicht es, dem Patienten mitzuteilen, ob die Kopplung, sprich die Lage des NFC-Masters relativ zum Sensor-Modul, ausreicht oder verbessert werden muss, um genügend Energie für den Betrieb zu übertragen. Auf diesem Weg kann der Sonsor den NFC-Master auch informieren, wenn die Übertragungs- oder Sendeenergie sinken darf – falls die induktive Kopplung gerade mehr 30 elektronikJOURNAL 05/2013 Im Test Derzeit befindet sich das erste Gesamtsystem von Senseonics zur Glukosemessung in der klinischen Erprobung in den USA, Kanada, Großbritannien, Deutschland und Indien. Die Entwicklung dieses Systems und des zugrunde liegenden NFC/Sensor-Transponder-ASICs von ZMDI ist dabei ein erster Schritt auf dem Weg zu vollständig autonomen, robusten telemedizinischen und klinischen Anwendungen, die sich vollständig in den normalen Alltag integrieren lassen. (lei) n Der Autor: Dr. Marko Mailand ist Projekt Manager für MixedSignal-IC-Entwicklung im Bereich Medical, Consumer und Industrial bei ZMDI in Dresden. www.elektronikjournal.com Designing an ASIC Chip to Control an Implantable Glucose Measurement Device by Uwe Günther (ZMDI); Andrew DeHennis (Senseonics, Inc.) November 2013 in Medical Design Briefs (U.S.) Sensor module design improves automotive electrical integration by Torsten Herz (ZMDI) 2014 in 21ic. eBooks (online publication) in Asia 传感器模块设计促进汽车电子的集成 作者:ZMDI 公司,Torsten Herz 得益于最新的基础传感器的控制系统所提供的精确、实时的监测,新汽车引擎的工作效率更高,对环境的影响更小。 这种性能改善的一项结果是,车辆中传感器应用的数量在过去几年中突破了两位数的增长。另一项结果是,在车辆中增加 更多的传感器模块成为一项趋势。这些模块必须可靠、强韧,必须能够在恶劣的物理、化学和电气压力条件下长期稳定工 作,并具有高精度。 此外,汽车传感器模块还需要一系列内置的诊断功能,以支持汽车 OEM 厂商“按需维护”的政策,以及安全攸关的 传感器应用(比如刹车压力传感)所需的特殊故障模式操作。 对于传感器模块而言,耐化学性(即对介质、湿气和腐蚀的免疫性)和物理强韧性(例如耐冲击和振动)主要取决于 所采用的材料以及组装和连接技术。电气强韧性,即电磁兼容性(EMC),取决于应用电路、电子元器件(集成电路,分 立器件)以及应用电路中的电气连接的布局走线。 本文将描述汽车传感器模块于电气强韧性方面的设计与应用。采用 ZSC31150 传感器信号调理器(SSC 集成电路)能 够设计出高精确度的传感器模块,其不仅能够在-40 至+150°C 的温度条件下工作,而且能够提供更好的 EMC 性能以及一系 列保护和诊断功能,以用于处理 SIL2/ASIL-B 等级的关乎安全的应用。采用传感器模块的智能化电气设计,将所有 EMC 相关参数考虑在内(即,寄生电容和电感),可以在最优的模块成本下实现高度的电气强韧性和内置的诊断功能,以及对 被测信号的极高精度测量。 因为传感器系统和处理单元之间的机械设计和互连对其电磁行为有着重要的影响,所以针对嵌入式传感功能(ESF) 和独立传感器模块(SASEM)使用不同的方法是至关重要的。 就 ESF 而言,传感器电子的位置靠近处理单元——在汽车应用中就是电子控制单元(ECU)。ESF 和 ECU 之间的连 接通常非常短(<<30cm),一般以印制电路板(PCB)上的走线来实现。现代 ESF 都提供了数字接口,例如串行外设接口 (SPITM,微芯科技的商标),其连接到 ECU 的微控制器。因为在同一 PCB 上且距离较近,因此有几种选择可供满足汽车 中严格的 EMC 要求(即,屏蔽或使用外部保护器件)。ESF 的一个例子就是气压传感。 对 SASEM 而言,配置是完全不同的。它们往往通过最长可达 2.5 米的无屏蔽线束连接到 ECU(参见图 1 中的示 例)。模块外壳(金属或塑料材质)内部可用的电路板空间是非常有限的,并趋于进一步的微型化,因为更少的材料耗费 等同于更轻的重量,进而等同于更低的成本。取决于不同的供电方式(电池供电或 ECU 供电),有各种兼容的输出接口: 电池供电的 SASEM • • • • • 脉宽调制(PWM)输出(高边负载) PWM 输出(低边负载) 控制器区域网络(CAN 总线)接口 本地互联网络(LIN 总线)接口 纯粹的模拟电压输出 ECU 供电的 SASEM • • • 比值测量模拟电压输出 SAE J2716 单边半字节传输(SENT)接口(快速、单向的点到点数字数据传输) 外设传感器(PSI5)接口(两线电流编码的数字数据传输) 图 1:汽车压力传感器模块的典型构造 ECU V+ OUT VHarness(1=1.7m) Plug for Electrical Connection Case of the Module Electronic Parts PCB Plug for Pneumatic or Hydraulic Connection Pressure Supply Adaptor(“PSA“) with Sensor System to be Monitored ECU V+ OUT V线束(I=1.7 米) 电气连接插头 模块外壳 电子器件 印制电路板(PCB) 气动或液压连接插头 带传感器的压力适配器(PSA) 待监测系统 对客车而言,使用 ECU 供电的 SASEM 来提供比值测量模拟电压输出这种方式仍然很常见。常见的供电电压大约是直 流 5V±10%,而单个 SASEM 总的电流消耗应当≤10mA。如前所述,外壳的工作条件相当恶劣,这就导致了一些有效的无 源 EMC 保护器件无法使用,比如铁氧体磁珠,它只能工作在最高+125°C 的温度下。 SASEM 的 EMC 要求 取决于模块的不同设计(例如,模块外壳的材料),ZSC31150 的差分输入端 VBP 和 VBN 到 VSSA 之间可能额外需要 两个 10nF(最大)电容(如图 2 绿色部分所示),以满足 SASEM 的 EMC 规范——这就需要我们对有关典型的汽车 EMC 要求加以讨论。 图 2:ZSC31150 汽车应用电路 5VDC Standardized Artificial Network(AN) 1μF 1μh 100nF Application-Specific Test Network VDD VOUT VSS DC BCI Antenna at Varying 
Positions Harness(1=1.7m) Case of the Module RF GND=chassis IRF_sink IRF_source ZC_GND PCB CS_PSA SSC-IC+ext.caps PSA CV+_C CVOUT_C CV-_C ZPSA_C 直流 5V 标准化人工网络(AN) 1μF 1μh 100nF 专用测试网络 VDD VOUT VSS 直流 各种位置处的 BCI 天线 线束(I=1.7 米) 模块外壳 射频 地=机壳 IRF_sink IRF_source ZC_GND PCB CS_PSA SSC 集成电路+外部电容 PSA CV+_C CVOUT_C CV-_C ZPSA_C 重要注解:强烈推荐在设计模块之前为每种 EMC 测试定制该电路,因为不同的 EMC 测试电路可能会要求不同的模块 设计。“通用的”解决方案往往过于昂贵。 为了满足苛刻的汽车 EMC 要求,必须考虑所有的相关电气寄生参数,特别是电气传感器电路和 SASEM 的其他传导器 件之间的寄生电容,如图 3 所示。模块的结构可能有许多不同的配置,其在汽车内部的组装如表 1 所示。外壳和压力适配 器(PSA)都可以是塑料或金属的,并且二者都可以与底盘有电流接触或没有接触。 模块构造 塑料外壳和塑料 PSA 塑料外壳和金属 PSA 金属外壳和塑料 PSA 金属外壳和金属 PSA 金属外壳和金属 PSA 汽车装配 与汽车底盘没有电流接触 PSA 与汽车底盘之间没有电流接触 PSA 与汽车底盘之间有电流接触 外壳与汽车底盘之间没有电流接触 外壳与汽车底盘之间有电流接触 外壳与汽车底盘或 PSA 之间没有电流接触,PSA 与汽车底盘间没有电 流接触 外壳与汽车底盘或 PSA 之间没有电流接触,PSA 与汽车底盘间有电流 接触 外壳与汽车底盘间没有电流接触,但与 PSA 之间有电流接触,PSA 与 汽车底盘间没有电流接触 外壳与汽车底盘间没有电流接触,与 PSA 之间没有电流接触,而 PSA 与汽车底盘间没有电流接触 外壳与汽车底盘及 PSA 之间有电流接触 配置 1 2 3 4 5 6 7 8 9 10 表 1. 模块构造和汽车装配可能的配置 在表 1 中,就 BCI 测试的等效 RF 电路而言,配置 1 和 10 代表了两个极限。在配置 1 中,所有的寄生阻抗都是最大 值;而在配置 10 中,它们都是最小值或已经短路。 影响 SASEM EMC 的模块特性 第一项考虑因素是 CBCI 天线和线束之间的电磁耦合。如果 RF 电流 IRF 的频率在 RF 发射 CBCI 天线和被测器件 (DUT)之间的线束段的初始谐振频率的范围内,那么感应电流 IRF_sink(见图 3)就是最大的。感应电流值取决于寄生阻 抗,特别是 ZC_GND。 随着 IRF_sink 的增加,其对被测器件的影响也变得更强。最坏的情况(即最大的射频敏感性)是配置 10,因为 ZC_GND = 0 欧姆(外壳和车辆底盘间的电流接触)且 ZPSA_C=0 欧姆(PSA 和外壳间的电流接触)。在这种情况下,IRF_sink 只受限 于被测器件的信号路径 VDD、VOUT 和 VSS 相对于外壳的寄生电容(CV+_C,CVOUT_C,和 CV-_C),以及相对于 PSA 的传 感器桥的寄生电容(CS_PSA)。不过,还有一些其他寄生电容(即相对于外壳的内部信号路径)也可能会降低被测器件的 射频敏感性。 示例 • • • DUT=±40mV(标称值)的模拟输出电压所允许的容差 输入信号的有效增益“G”(在 SSC 集成电路的模拟前端进行调节):G=400V/V 直流桥电阻=4kΩ;在其差分端产生的 AC 桥阻抗=2kΩ 在本示例中,由射频能量所引起的差分桥电压的变化极值是(±40mV/G)=±0.1mV,而所产生的桥分流电流之间差值 的极值是±0.1mV/2kΩ = ± 50nA! 这个非常简单的示例说明了机械结构和所选材料对传感器模块的 EMC 性能的影响。在汽车大批量生产的条件下,考 虑到系统的成本,定义寄生参数要更为困难。 一种有效的设计理念是基于选择不导电材料作为示例压力传感器模块的外壳和压力适配器(PSA)(参考表 1 中的配 置 1)。这种不导电材料确保了 ZPSA_C 和 ZC_GND 取得最大值。但是,为了消除 PCB 的导电结构相对于地的其他寄生电容 的影响,设计必须将模块在车内的装配情况考虑在内。如果被监测系统与其外壳之间的连接也是由不导电材料组成的,那 么寄生阻抗就是最大值。 在 PCB 布局走线时,比较容易确保相对于地的寄生电容 CV+_C,CVOUT_C 和 CV-_C 近乎相等,以使得进入的射频能量 就像是 SASEM 的共模信号。换句话说,在 SASEM 处,没有合适的射频地可用,这使得在被测的宽谱频率范围内阻拦该 射频能量(例如,通过电容)变得几乎不可能。此外,电容不是“理想”元件——它们内部也有寄生参数——特别是其串 联电感(ESL),它决定了电容开始像电感那样起作用的频率极限。典型的 0805 封装 MLCC-X8R 电容的 ESL 是 1~1.5nH。只有通过高的共模抑制比(CMRR),才能实现对所加射频能量的高抗扰性。ZSC31150 可以通过配置保证这一 点。 如果 SASEM 外壳和 PSA 需要用导电材料,所产生的寄生阻抗会更小,而感应射频电流会更大。因为 SASEM 中不同 元器件(即本示例中的外壳、PCB 和 PSA)间存在机械容差,所以在大批量汽车生产的制造条件下,很难确定这些寄生参 数。 这个问题的一种解决方案是,为感应射频电流设计一条从线束到地的通道,它需要靠近传感器系统的信号路径并具有 极低的阻抗。模块的导电外壳可以提供这种通道,并为模块的 PCB 提供针对 GHz 范围内辐射射频信号的屏蔽。采用这种 结构理念的另一优势在于,SASEM 在车内装配的条件无法降低其电磁抗扰性,因为采用这种设计考虑了最坏的情况(参见 表 1 中的配置 10)。 SASEM 的负电源电压与底盘之间不允许有电流接触。在外壳与地之间增加一个具有足够高的额定工作直流电压和承受 高瞬态电压的电容即可以为射频电流提供这样的一条路径。这一解决方案的巨大缺点是必须使用到金属外壳(例如,铝质 外壳)的强韧性以提供长期稳定的电气连接,从而增加了成本。此外,外壳与地之间的电容必须指定相对较高的电压(例 如 500 或 1000V),从而导致更大的封装和更高的成本。 上面讨论的隔直电容(连接在 VDD-VSS 和 VOUT-VSS 之间)的 ESL 以及所导致的对有效频率范围的限制也需要加以 考虑。同样,备选方案是创建一种 PCB 布局走线,针对到 SSC 集成电路的敏感的传感器信号线进行优化,以便通过浮动 的金属外壳为感应射频能量提供高的射频对称性和完全相同的阻抗。这会使得进入的射频能量就像是 SASEM 的共模信号 一样。针对于金属外壳直接放电的 ESD 所需的强韧性(典型要求:自 SASEM 的地算起±15kV)可以通过将应用电路绘制 在 PCB 上以及与金属外壳间的其他适当的隔离来实现。 通过最坏情况下大电流注入测试的模块设计 图 4 演示了基于表 1 中的配置 10,在不使用外壳到地电容的情况下,所生成的面向共模大电流注入(CBCI)的电路。 图 4:基于配置 10 经过优化的传感器模块的 CBCI 测试电路和 EMC 电路 5VDC Standardized Artificial Network(AN) 1μF 1μh 100Nf(typ.) 
Application-Specific Test Network VDD VOUT VSS DC BCI Antenna at Varying Positions Harness(1=1.7m) Case of the Module RF GND=chassis IRF_sink PCB CS_PSA ZSC31150+ext.caps PSA CV+_C CVOUT_C CV-_C 直流 5V 标准化人工网络(AN) 1μF 1μh 100Nf(典型值) 专用测试网络 VDD VOUT VSS 直流 位置变化的 BCI 天线 线束(I=1.7 米) 模块外壳 射频 地=底盘 IRF_sink PCB CS_PSA ZSC31150+外部电容 PSA CV+_C CVOUT_C CV-_C 通过图 4 中 PCB 上所示的 3~5 个外部电容,可以用配置 10 来实现合适的 EMC 性能。在“最佳情况”条件 1 中,只需 C1、C2 和 C5 即可满足 EMC 要求。这些电容也能够显著降低模块引脚处的 ESD 峰值电压,因此通过>4kV(例如 8kV ESD)电压的 ESD 测试成为可能。在“最坏情况”配置 10 中,除了 C1、C2、C3、C4 和 C5 之外,可能还需要进行合理的 PCB 布局走线并采用与金属外壳间的合适隔离。 PCB 布局走线对于降低传感器模块的电磁敏感性是非常重要的。电容 C1 和 C2 必须尽可能地靠近线束的末端,而到 SSC 集成电路引脚的走线应该具有几乎完全相等的尺寸。强烈推荐所有连接传感器元件和 SSC 集成电路输入的 PCB 走线 都应当越短越好,越相近越好。电容 C5(如果需要,还有 C3 和 C4)必须放置得尽可能地靠近 SSC 集成电路引脚。所有 这些建议都能够帮助优化 PCB 布局走线的射频共模特性,以实现射频拒绝(RF rejection)。 符合 GMW3097 汽车标准的模块设计 一项严格且有关暂态强韧度的 EMC 标准示例是通用汽车的 GMW3097 标准。这项汽车规范要求系统能防护以电容方 式耦合到供电线上的高达 85V 的暂态脉冲,后者会产生串扰行为。这项规范要求在测试时用 100nF 的耦合电容以串联方式 连接到“尖峰发生器”,后者所产生的一系列 10 个 85V 脉冲会被传输到 SASEM 的线束上。在测试中,SASEM 允许有一 项或多项功能超过规范的极限值;但是一旦测试完成,这些功能必须回到规范。 应用要求规定了为满足GMW3097 规范需要哪些I/O线路。在下面的示例中,由等式 2 确定,我们选择了在图 5 的测试 线路中所示的电路元件值,以便在供电线路(VCC)上满足 85V规范。要在ZSC31150设计中满足这项要求,需要如下的元 器件,因为它的供电引脚VDDE和VSSE以及模拟输出引脚OUT都指定了最大±33VDC的直流过压: C1 = 220 nF C2 = 47 nF(推荐值;参考 ZSC31150 的数据手册) C3 = 100 nF(推荐值;参考 ZSC31150 的数据手册) C4 = 100 nF (2) (2) 其中,在尖峰产生过程中 V1 = 85V, (3) 在尖峰产生过程中 图 5:示例 GMW3097 EMC 测试设置 Sensor Module VDDA VDDE VSSA ZSC31150 VSSE VCC OUT GND Application under Test VCC GND Spike Generator (Impedance ≤2Ω ) 面向满足诊断需求的模块设计 传感器模块 VDDA VDDE VSSA ZSC31150 VSSE VCC OUT 地 被测应用 VCC 地 尖峰发生器(阻抗≤2Ω) 为了遵从汽车 OEM 厂商“按需维修”的政策,需要 SASEM 提供一系列内置诊断功能,以检验传感器元件,到 ECU 的连接以及内部电子线路的正常工作(特别是 SSC 集成电路)。除了 SSC 集成电路中由其架构所决定的内部诊断以外, “按需维修”还需要一些常见的诊断功能。 以下诊断功能与传感器元件有关: • • 传感器连接检查(考虑走线的短路和断路) 传感器老化检测 对于 SASEM 和 ECU 之间断掉的线束电线的检测,有两种重要的情况: • • 电源缺失;即 VCC 线存在断路 地缺失;即地线存在断路 对于这两种情况,必须要确保 ECU 可以检测到输出信号线的这些故障状况。根据 ECU 设置的不同,可能会要求输出 信号通过连接到信号线的 ECU 负载电阻驱动到诊断故障带(DFB)。为启用此设置,在这两种故障情况下,SASEM 的信 号输出必须被驱动到高阻态/低漏电流态。例如,当出现电源/地缺失时,规定 ZSC31150 的输出其输出漏电流在+150°C 时≤ ±25µA(在+125°C 下≤ ±12.5µA)。 最佳的独立传感器模块设计 对于 ECU 供电的独立传感器模块,如果其接口采用电阻性的传感器元件并提供比值测量模拟电压输出,那么汽车 OEM 应用的所有技术和商业要求都可以通过深思熟虑的模块设计(特别是 PCB 布局走线)来得以满足,这时需要考虑无 源元器件的真实特性以及所有相关的寄生参数,并利用高性能的 SSC 集成电路(例如 ZSC31150)。 Torsten Herz 是 ZMDI 公司全球现场应用工程组经理 Mehr Power für Pioniere! Article and interview with Steffen Wollek (ZMDI) March 2014 in Deutsche Bank_results (Germany), print and digital 18 Finanzierung_Innovationen Deutsche Bank_r e s u l t s Mehr Power für Pioniere! Öffentliche Fördermittel erleichtern Mittelständlern die Produktentwicklung. Die Hausbank sorgt dafür, dass alles klappt. Angst vor Bürokratie ist dabei unbegründet FOTOS: CORBIS, FOTOLIA E rfolg ist für den Dresdner Halbleiterhersteller ZMDI auch eine Frage der Geschwindigkeit. Im Wettbewerb mit den globalen Chip-Giganten sucht das Management des Mittelständlers unentwegt nach aussichtsreichen speziellen, innovativen Marktsegmenten. „Dort wollen wir schneller sein als die Großen und diesen Vorsprung gewinnbringend nutzen“, sagt Finanzvorstand Steffen Wollek. Wichtigstes Produkt von ZMDI sind energieeffiziente und stromsparende Chips für Sensoren in der Fahrzeugindustrie, der Medizintechnik sowie im Industrie- und Consumerbereich. Zu den Abnehmern zählen führende Unternehmen wie ZF, Continental, Braun, Festo und Casio. Mit hohem Einsatz arbeitet das Unternehmen daran, seinen Vorsprung zu verteidigen. 
30 Prozent des Umsatzes von zuletzt rund 60 Millionen Euro fließen in die Entwicklung – deutlich mehr als bei Thesen Der Staat fördert: Bei der F&E-Finanzierung stoßen kleinere Unternehmen schnell an Grenzen. Deshalb bieten EU, Bund und Länder eine Fülle von Förderprogrammen. Die Bank hilft: Viele Unternehmen schöpfen diese Möglichkeiten nicht aus. Dabei kann gerade die Bank ihnen helfen, ohne großen Aufwand an Förderung zu kommen. Die Bürokratie schrumpft: Ein neues Programm des Europäischen Investitionsfonds, das die Deutsche Bank als Erste seit Anfang vorigen Jahres vermittelt, kommt gezielt Unternehmen mit weniger als 500 Beschäftigten zugute. der Konkurrenz, die laut Wollek im Schnitt 18 bis 20 Prozent investiert. Dabei ist Geduld gefragt. „Die meisten Projekte sind langfristig“, sagt Wollek. „Vom Beginn der Entwicklung bis zur Erzielung der ersten Umsätze dauert es mindestens zwei Jahre, und die Innovationszyklen werden immer kürzer.“ Für einen langen Atem sorgen nun unter anderem staatliche Fördermittel. Ob Bund, Land oder EU – Unterstützung durch die öffentliche Hand trägt entscheidend dazu bei, dass der Mittelstand seine Innovationskraft voll entfalten kann. „Bei der Finanzierung von Forschung und Entwicklung mit Eigenmitteln stoßen vor allem kleine Unternehmen an Grenzen“ – so lautet das Fazit einer Studie des Deutschen Instituts für Wirtschaftsforschung (DIW). Allein über Kredite lassen sich die F&E-Kosten aber häufig nicht finanzieren, da deren Erfolg schwer prognostizierbar ist. Die Lücke schließen dann die öffentlichen Programme als „wichtige zusätzliche Finanzierungsquelle“. Die Praxis zeigt jedoch, dass gerade mittelständische Firmen die Möglichkeiten oft nicht nutzen – weil sie diese entweder nicht kennen oder bürokratische Hindernisse fürchten. „Dabei bietet Innovationsförderung die Chance, die F&E finanziell zu beschleunigen und bei Erfolg die Wettbewerbsfähigkeit des Unternehmens entscheidend zu stärken“, sagt Sabine Tieves, Leiterin für Öffentliche Fördermittel bei der Deutschen Bank. Keine Einzelnachweise mehr nötig Bei der Suche nach passenden Fördertöpfen kann die Hausbank die Rolle des Wegweisers übernehmen. So war es im Fall von ZMDI. Seit dem vergangenen Jahr setzt das Unternehmen Mittel der Europäischen Investitionsbank (EIB) ein, und zusätzlich unterstützt der Europäische Deutsche Bank_r e s u l t s Video Finanzierung_Innovationen 19 20 Finanzierung_Innovationen Deutsche Bank_r e s u l t s FOTOS: ZMDI (2) „Das unbesicherte KfW-Darlehen ist Eigenkapital auf Zeit“ ZMDI: Europa hilft forschen Für den Sensorspezialisten ZMDI sind Fördermittel ein wichtiger Baustein der Innovationsstrategie. 30 Prozent des Umsatzes investieren die Dresdner in F&E. Auch die Programme der staatlichen Förderbank KfW hat Finanzvorstand Steffen Wollek (Foto) im Blick. Aktuell setzt ZMDI auf Mittel der Europäischen Investitionsbank sowie des Europäischen Investitionsfonds, der innovative Mittelständler mit bis zu 500 Mitarbeitern mit Garan- Entwicklung mit Staatshilfe FOTOS: FRIEDOLA TECH (2) tien unterstützt. „Vor allem die langfristige Ausrichtung hat uns überzeugt“, sagt Wollek. Friedola Tech: Recycling macht stark Beim Kunststoffhersteller Friedola Tech wird schon in der Entwicklung dafür gesorgt, dass auch das Recycling optimal funktioniert. „Wir verstehen uns als Greentech-Unternehmen“, sagt der kaufmännische Leiter Werner Eisenhardt (Foto). Logistiker und Autohersteller sind wichtige Kunden der Thüringer – sie legen viel Wert auf Leichtbau, um die CO2-Bilanz zu senken. 
Auch darauf muss Friedola Tech bei Innovationen achten. Unterstützt von der Europäischen Investitionsbank entstand eine neue, leistungsfähige Laminieranlage. Investitionsfonds (EIF) die Finanzierung mit einer 50-Prozent-Garantie. „Die Deutsche Bank hat uns das Programm vorgeschlagen, das wir dann aus einer Reihe von Instrumenten ausgewählt haben“, sagt Steffen Wollek. ZMDI profitiere nun von günstigen Zinsen und mehr Finanzierungsspielraum. „Vor allem aber hat uns die langfristige Ausrichtung überzeugt.“ Über fünf Jahre läuft die Förderung – die ersten zwei Jahre sind tilgungsfrei. „Das ist angesichts der herrschenden Innovationszyklen für uns wichtig“, sagt Wollek. „Es geht darum, nachhaltiges Wachstum abzusichern und nicht nur eine kurzfristige Unterstützung des operativen Geschäfts.“ Denn das Ziel von ZMDI lautet: „Wir wollen weiter wachsen.“ In den USA und Asien hat das Unternehmen schon Standorte aufgebaut. Der Zugang zu den Garantien des Europäischen Investitionsfonds wird für Unternehmen wie ZMDI einfacher, da die EIB-Tochter nicht nur einzelne Projekte unterstützt. Firmen mit bis zu 500 Beschäftigten weisen nun anhand von einfachen Kriterien nach, dass sie innovativ sind, um die Garantie zu erhalten. Die Deutsche Bank ist hierzulande das erste Institut, das dabei mit dem EIF kooperiert. Unterstützt dieser ein Unternehmen, senkt das die Risiken – Kredite können so zu günstigeren Konditionen vermittelt werden. Ergänzend wirkt das ERP-Innovationsprogramm der staatlichen Förderbank KfW – es begünstigt Mittelständler bei Investitionen in Neu- und Weiterentwicklungen. „Hier wird nicht das Unternehmen als solches gefördert, sondern ein spezielles förderfähiges Projekt. Dabei ist eine umfassendere Dokumentation erforderlich“, sagt Sabine Tieves. Der Einsatz kann sich freilich lohnen. „In Bezug auf die Zinsen ist es eines der günstigsten Angebote. Zudem kann ein unbesichertes Nachrangdarlehen gewährt werden, es ist Eigenkapital auf Zeit.“ Für results haben die Fördermittel-Experten der Deutschen Bank die wichtigsten Förderinstrumente für Mittelständler analysiert (Tabelle Seite 22). Deutsche Bank_r e s u l t s Enge Kooperation mit den Kunden schon im Entwicklungsprozess – nach diesem Prinzip arbeitet der thüringische Kunststoffspezialist Friedola Tech. „Wir verstehen uns als Innovationstreiber für die Logistik und die Fahrzeugindustrie“, sagt der kaufmännische Leiter Werner Eisenhardt. Leichtbau und Wiederverwendungsfähigkeit nennt er als zentrale Kriterien der Entwicklungsarbeit seines Unternehmens. „CON-Pearl“ heißt ein Kunststoffmaterial von Friedola Tech – Hohlkammern machen es besonders leicht, eine Glasfaserverstärkung sorgt für hohe Stabilität. CON-Pearl ist die Basis für zahlreiche Produkte des Unternehmens. Ein Beispiel: Kofferraumböden für einen deutschen Autohersteller. „Wir haben 2012 das gemeinsame Entwicklungsprojekt vorgeschlagen“, sagt Eisenhardt. In diesem Fall kamen die Partner auch ohne Innovationsförderung ans Ziel – heute liefert Friedola Tech die gesamte Kofferraumauskleidung. Öffentliche Mittel setzt das Unternehmen dagegen ein, um sein Grundprodukt zu verbessern – eine neue Laminieranlage soll die Produktionskapazität verdoppeln. „Wir arbeiten in diesem Zuge auch an neuen Eigenschaften des Materials“, sagt Eisenhardt – feuerhemmend und leitfähig soll es sein. 
"That is something the new line is particularly well suited for." Friedola Tech is investing around six million euros. "That compares with a production value of around 25 million euros per year," says Eisenhardt. The most important customer is the logistics sector, for which Friedola Tech produces transport containers for bulk goods such as granulates. As with ZMDI, the project is funded via the EIB and the EIF, so Friedola Tech, with its roughly 400 employees, also obtains a favorable interest rate. "We shared a comprehensive market strategy with the bank and listed various fields of application for our innovations," explains Eisenhardt. "We are now classified overall as an innovative company. It was therefore not necessary to present specific projects."

28.6 percent of all midsize companies in Germany rely on public funding to help finance their R&D expenditure, according to a DIW survey. (Source: DIW Berlin; 1,391 companies surveyed; averages in percent.)

In parallel with the EU, the federal government and the federal states also offer support for research-intensive midsize companies. On its way to international markets, Montanhydraulik AG in Holzwickede is using low-cost loans from NRW.BANK. A five-year term and an interest rate of just 1.45 percent: "that is extremely advantageous for us," says commercial managing director Josef Mertens. The money helps finance a new lathe with which Montanhydraulik intends to win new customers worldwide. Price tag: 1.5 million euros.

The state shares the risks
The company, family-run for 60 years, works on a large scale: the Westphalian firm's hydraulic cylinders are used in dams, locks and mobile cranes. Most recently, Montanhydraulik generated revenue of about 225 million euros with 1,100 employees worldwide. Thanks to the new production facility, even larger variants are possible. The 28-meter-long cylinders are used on oil and gas platforms and on drillships. "In its basic concept the machine is not an innovation," explains Mertens. "But it puts us in a position to manufacture an innovative product tailored specifically to the individual customer." A world-leading manufacturer of equipment for gas and oil exploration can thus be supplied in the future. At Montanhydraulik, it is not only the development effort that is rewarded with NRW.BANK funds. The product also features several improvements, for instance in the control system. "The new machine consumes considerably less electricity than its predecessor," says Mertens. The commitment to a better energy balance opened the way to a further funding pot, one that supports investments in greater efficiency.

Low interest rates thanks to an EU guarantee
Deutsche Bank is providing up to 120 million euros in additional funds. With a guarantee program, the European Investment Fund (EIF) supports firms that have fewer than 500 employees. The new form of risk sharing is called the "Risk Sharing Instrument." Within two years, Deutsche Bank, as the EIF's first partner in Germany, can provide innovative companies with up to 120 million euros in additional funds on favorable terms thanks to a 50 percent guarantee. In the event of late payment or default, the EIF covers 50 percent of the outstanding loan amount. "That fundamentally changes the view of risk," says Johannes Winkler, public funding expert at Deutsche Bank. The program goes hand in hand with a change of course in EU funding policy.
Companies no longer have to document individual projects; instead, they need only meet one of several criteria to be categorized as innovative and thus gain access to the funds. These criteria include, among others, being based in a technology park, having registered a patent, or having received an innovation award within the past 24 months.

FURTHER INFORMATION
Contact: your relationship manager.
European Investment Bank: www.eib.org
Funding from the European Investment Fund: www.eif.org
KfW innovation funding for midsize companies: www.kfw.de, keyword "Mittelstandsförderung"

For a long time, Montanhydraulik had worked entirely without public support. "We always reinvest a large part of our profits in the company," explains managing director Mertens. The international expansion, however, required funds that could not be raised from internal resources alone. "The bank took on the role of initiator, accompanied the entire process and also handled the bureaucratic formalities," says Mertens. He is convinced that the effort will pay off for his company and for its home region alike. "Until now there were only a few companies worldwide that could manufacture such cylinders. Now we can enter the competition." In India, for example, Montanhydraulik wants to expand its business. "There is a strong focus there on generating electricity from hydropower," says Mertens. "Large cylinders are needed to operate the lock gates." He expects the new hydraulic cylinders to contribute ten to 15 percent of the company's total revenue in the medium term. "There is also great potential for us in South America," says Mertens. "However, large investments are needed to gain a foothold there. It only works with a local partner."

Innovation and internationalization are closely interwoven at pfm medical as well. The Cologne-based medical technology manufacturer has steadily increased the share of revenue generated abroad; most recently it stood at a good 40 percent, and the trend is still upward. The product range is broad: it includes scalpels, ventilation masks, surgical implants and products for therapy management. Growth is driven above all by the development-intensive products; besides the implants, these are products used in heart surgery. pfm medical increased its R&D spending by 13.6 percent in 2012. "For us this is oriented very much toward the long term," says CFO Reinhard Blunck. "That makes funding particularly important, because we cannot convert the work into revenue as quickly."

Table: selected concepts for financing innovation through public funding for midsize companies
Instruments: the global loan of the European Investment Bank; various funding programs of the KfW banking group and of the federal states' development institutes (aimed at the innovative project); and the guarantee of the European Investment Fund (aimed at the innovative company).
Effect: more favorable customer interest rates (compared with normal bank financing) lead to lower interest charges for the customer.
The guarantee of the European Investment Fund supplements the customer's other customary bank collateral and leads to a reduction of the risk premium.
Financing volume per project: up to 12.5 million euros (EIB global loan); generally up to 5 million euros (KfW programs); generally up to 10 million euros (programs of the state development institutes); up to 7.5 million euros (EIF guarantee).
Variants: combination with the EIF guarantee (EIB global loan); mezzanine tranches (KfW); exemptions from liability (state development institutes); combination with the EIB global loan (EIF guarantee).
Eligible companies: fewer than 3,000 employees (EIB global loan); up to 500 million euros in revenue (KfW); generally up to 500 million euros in revenue (state development institutes); fewer than 500 employees (EIF guarantee).
Access: your relationship manager can arrange access to all of the financing options named here. Beyond these, there are further financing options for innovation, for example via grants or equity. Important: in many cases, the application for public funding must be submitted before the investment begins.

The state thus bears part of the risks that the company could not take on alone. Ten to 15 percent of the products are less than five years old. The company generated around 85 million euros in revenue in 2012. The growing importance of research and development reflects a change in strategy: pfm medical is increasingly transforming itself from a trading company for medical products into a manufacturer. "We want to be more independent," says Blunck. Here, the company's own patents are the basis for future market success. pfm medical is pushing into segments whose revenue volume is too small for large corporations and industry heavyweights. "We do not want to compete with the really big players." As an example, Blunck cites cutting tools for special tissue analyses. "Our revenue here is 17 to 18 million euros, which corresponds to a market share of 90 percent. That is not interesting for large corporations."

Family businesses have particular needs
In research, Blunck sees family businesses at an advantage, because their management is generally less focused on the short term. A disadvantage, by contrast, is their more limited financial resources. "Financing always involves a bank that knows where the relevant funding pots are," says Blunck. "For us as a midsize company that is crucial, because we have no one in the company who could take care of it on their own." For a long time, he says, there was a "certain aversion" to public funding. "It had a great deal to do with control and bureaucracy, and in some cases it involved far-reaching legal questions," says Blunck. "In addition, the funding was often geared too strongly toward large corporations." Fortunately, that has since changed. "Increasingly, we find that this can be managed well together."
Thomas Mersch

Montanhydraulik: New machines
Every meter counts: a Norwegian customer asked whether Montanhydraulik could also manufacture 28-meter-long hydraulic cylinders for oil and gas platforms. A completely new machine was needed to fulfill the request. The Westphalian company managed the necessary investment with support from NRW.BANK. "That moves us forward in our international expansion," explains commercial managing director Josef Mertens (photo).

pfm medical: Research with patience
EU, federal government, federal states: innovative companies that want to make use of public funding will find a range of places to turn.
For the Cologne-based medical technology manufacturer pfm medical, a grant from the EU provided the right kind of funding. Around 34,000 euros went into research and development. Such funding is important: "Our research is oriented toward the long term; we cannot always convert the work into revenue quickly," says CFO Reinhard Blunck (photo).

ZMDI RELEASES THE ZSSC416X SENSOR SIGNAL CONDITIONER FAMILY
Press release for the product release ZSSC416x, July 2014, on industryeurope.net
ZMDI RELEASES THE ZSSC416X SENSOR SIGNAL CONDITIONER FAMILY
09/07/2014
ZMD AG (ZMDI), a Dresden-based semiconductor company that specializes in enabling energy-efficient solutions, today announces the ZSSC416x, the first series in ZMDI's next generation of sensor signal conditioners. As a global supplier of analog and mixed-signal solutions for automotive, industrial, medical, information technology and consumer applications, ZMDI is pleased to introduce a state-of-the-art sensor signal conditioning family capable of measuring single, dual or differential bridge inputs and internal or external temperature sensors. With a wide analog pre-amplification range, the ZSSC416x family is capable of highly accurate amplification and sensor-specific correction for most resistive bridge sensors as well as thermocouple readings. Measured values are provided via the digital SENT 3.0 output or I2C™ (trademark of NXP).
"The ZSSC416x is the first series of products from ZMDI's Next Generation Sensor Signal Conditioner Family, designed for ease of integration into our customers' sensor platforms without sacrificing the performance and flexibility needed for the lowest possible system costs," stated Steve Ramdin, Global Product Line Manager for Multi Market Sensor Platforms at ZMDI. In addition to the highest proven performance needed for future products, ZMDI's Next Generation Sensor Signal Conditioner Family ICs offer customers ease of use, with a wide range of predefined signal processing configurations and flexible input pin selection for quick integration in a wide variety of applications. Mr. Ramdin added, "Our goal is to make it easy for our customers to build flexible sensor platforms, quickly and at the lowest possible system costs."
Features
• Full SENT Rev. 3.0 compliance
• Two full-bridge sensor inputs; configurable for single, dual or differential measurements
• Internal and external temperature sensing
• Supply voltage range: 4.75 V to 5.25 V
• Overvoltage protection to ±18 V
• ADC resolution: 12 to 18 bit
• Output resolution: 12 bit via SENT and up to 16 bit via I2C™
• Designed for ASIL B requirements in safety-relevant applications
• Temperature range: -40°C to +150°C
• Flexibility for end applications (e.g., additional NTC linearization, algorithms for HTS sensors, calculation of mass flow)
• Standardized pin layout for family ICs facilitates platform designs
With built-in overvoltage and reverse-polarity protection, excellent electromagnetic compatibility and built-in diagnostics features, the ZSSC416x family is optimized for safety-critical applications and harsh environments.
Availability and Pricing
The ZSSC416x family will be available for mass production in December 2014; however, interested customers can receive samples and pricing today by contacting ZMDI or their distribution partners directly.
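The press release notes that measured values are provided via the SENT 3.0 output or I2C. As a rough illustration of what host-side access over I2C can look like, the sketch below reads a few bytes from a signal conditioner using the Linux i2c-dev interface. The bus path, the 7-bit device address (0x28) and the three-byte response layout are placeholders chosen for the example, not values taken from the ZSSC416x documentation; a real design would follow the device's datasheet and its command and scaling conventions.

/*
 * Illustrative sketch only: reading a conditioned measurement from a
 * sensor signal conditioner over I2C via the Linux i2c-dev interface.
 * The device address (0x28) and the 3-byte response layout are
 * hypothetical placeholders, not taken from the ZSSC416x datasheet.
 */
#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define I2C_BUS  "/dev/i2c-1"  /* I2C adapter on the host (assumption) */
#define DEV_ADDR 0x28          /* 7-bit slave address (placeholder)   */

int main(void)
{
    int fd = open(I2C_BUS, O_RDWR);
    if (fd < 0) {
        perror("open i2c bus");
        return 1;
    }

    /* Select the signal conditioner as the target slave device. */
    if (ioctl(fd, I2C_SLAVE, DEV_ADDR) < 0) {
        perror("select slave");
        close(fd);
        return 1;
    }

    /* Assumed frame: 2 bytes of bridge measurement followed by 1 status byte. */
    uint8_t buf[3];
    if (read(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        perror("read measurement");
        close(fd);
        return 1;
    }

    /* Assemble the sample, assuming a big-endian byte order on the bus. */
    uint16_t raw = (uint16_t)((buf[0] << 8) | buf[1]);
    printf("raw bridge value: %u, status: 0x%02x\n", (unsigned)raw, buf[2]);

    close(fd);
    return 0;
}

Converting the raw count to an engineering unit (for example pressure or temperature) would use the device's calibrated transfer function as documented in its datasheet, which is outside the scope of this sketch.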
Zentrum Mikroelektronik Dresden AG, Global Headquarters
Grenzstrasse 28, 01109 Dresden, Germany
Central Office: Phone +49.351.8822.306, Fax +49.351.8822.337
European Technical Support: Phone +49.351.8822.7.772, Fax +49.351.8822.87.772
European Sales (Stuttgart): Phone +49.711.674517.55, Fax +49.711.674517.87955

Zentrum Mikroelektronik Dresden AG, Japan Office
2nd Fl., Shinbashi Tokyu Bldg., 4-21-3, Shinbashi, Minato-ku, Tokyo, 105-0004 Japan
Phone +81.3.6895.7410, Fax +81.3.6895.7301

ZMD America, Inc.
1525 McCarthy Blvd., #212, Milpitas, CA 95035-7453, USA
Phone 1.855.275.9634 (USA), Phone +408.883.6310, Fax +408.883.6358

Zentrum Mikroelektronik Dresden AG, Korea Office
U-space 1 Building, 11th Floor, Unit JA-1102, 670 Sampyeong-dong, Bundang-gu, Seongnam-si, Gyeonggi-do, 463-400 Korea
Phone +82.31.950.7679, Fax +82.504.841.3026

ZMD Far East, Ltd.
3F, No. 51, Sec. 2, Keelung Road, 11052 Taipei, Taiwan
Phone +886.2.2377.8189, Fax +886.2.2377.8199