hosted by publicationslist.org

Shubhankar Basu

basu.shubhankar@gmail.com

Conference papers

2009
 
S Basu, B Kommineni, R Vemuri (2009) Variation-Aware Macromodeling and Synthesis of Analog Circuits Using Spline Center and Range Method and Dynamically Reduced Design Space. In: 22nd International Conference on VLSI Design, 2009, pp. 433-438. University of Cincinnati, Cincinnati, OH: IEEE Computer Society.
Abstract: Manufacturing and process irregularities in nanometer technologies can degrade yield and severely slow down the design cycle. Process-variation-aware methodologies can help improve yield and meet time-to-market requirements for system-on-chip designs. Analog circuits are extremely sensitive to device mismatches and exhibit non-linear variations in their performance under the influence of manufacturing irregularities; performance variation in individual blocks can degrade system performance. In this work, we present a variation-aware performance macromodeling technique for analog building blocks that is fast and accurate and guarantees convergence during synthesis. The improvements in accuracy and time complexity of the macromodel generation process are achieved by constructing a target design region graph and dynamically reducing the design space. The target design region also helps reduce re-synthesis time and achieve faster convergence. Experimental results demonstrate the accuracy of the macromodels and the reduction in synthesis time compared to SPICE-based simulation-in-the-loop evaluations and static and adaptive sampling based techniques.
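The dynamic design-space reduction described in the abstract can be illustrated with a minimal sketch: candidate design points whose macromodel-predicted performance interval cannot overlap the target specification window are pruned before any further, more expensive evaluation. All function names, candidate values, and bounds below are illustrative assumptions, not details from the paper.

```python
def prune_design_space(candidates, predict_bounds, spec_lo, spec_hi):
    """Keep only the design points whose predicted performance
    interval overlaps the target specification window; everything
    else is dropped before any further (expensive) evaluation."""
    kept = []
    for point in candidates:
        lo, hi = predict_bounds(point)
        if hi >= spec_lo and lo <= spec_hi:
            kept.append(point)
    return kept

# Toy macromodel: performance of design point p lies in [p - 1, p + 1],
# and the specification window is [5, 7].
candidates = [0, 2, 4, 6, 8, 10]
pruned = prune_design_space(candidates, lambda p: (p - 1, p + 1), 5, 7)
print(pruned)  # only these points are worth simulating further
```

In the paper's flow the pruning would be driven by the spline macromodels rather than the toy interval function used here.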
2008
 
S Basu, B Kommineni, R Vemuri (2008) Mismatch Aware Analog Performance Macromodeling Using Spline Center and Range Regression on Adaptive Samples. In: 21st International Conference on VLSI Design (VLSID 2008), pp. 287-293. University of Cincinnati, Cincinnati, OH: IEEE Computer Society.
Abstract: Analog design traditionally relies on a designer's knowledge and expertise. Numerous automated synthesis methods have been proposed over the years; they reduce time complexity and explore a wider design space. Manufacturing-induced defects in the process parameters render device characteristics inconsistent with their predicted behavior. Device mismatch causes significant variation in analog circuit performance. Monte Carlo simulation is known to be the most accurate method of measuring performance under random variation, but it is prohibitively expensive during the synthesis process. In this work we present a novel Spline Center and Range Regression (SCRR) technique on adaptive samples to model performance in the presence of process variation. Mismatch-aware macromodels can provide considerable speedup during synthesis with minimal loss in accuracy. Experimental results demonstrate the accuracy of the macromodels on an independent validation set using 180nm and 65nm technologies.
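The Monte Carlo evaluation that the abstract calls prohibitively expensive inside a synthesis loop looks, in miniature, like the following sketch. The performance function, nominal values, and sigma are all hypothetical; in practice each sample would be a full SPICE simulation.

```python
import random
import statistics

def gain_db(vth1, vth2):
    # Hypothetical performance function: the gain of a differential
    # pair degrades with the Vth mismatch between its two devices.
    return 40.0 - 100.0 * abs(vth1 - vth2)

def monte_carlo(n_samples=10000, vth_nominal=0.4, sigma=0.01, seed=0):
    """Draw device parameters from a normal distribution and collect
    the resulting spread of circuit performance."""
    rng = random.Random(seed)
    samples = [gain_db(rng.gauss(vth_nominal, sigma),
                       rng.gauss(vth_nominal, sigma))
               for _ in range(n_samples)]
    return statistics.mean(samples), statistics.stdev(samples)

mean_gain, std_gain = monte_carlo()
print(f"gain under mismatch: {mean_gain:.2f} dB +/- {std_gain:.2f} dB")
```

Each sample costs one circuit evaluation; with SPICE standing in for gain_db, the 10,000 simulations here are exactly the cost a mismatch-aware macromodel is meant to avoid inside the synthesis loop.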
 
S Basu, B Kommineni, R Vemuri (2008) Variation Aware Spline Center and Range Modeling for Analog Circuit Performance. In: 9th International Symposium on Quality Electronic Design (ISQED 2008), pp. 162-167. University of Cincinnati, Cincinnati, OH: IEEE Computer Society.
Abstract: With scaling technologies, process variations have increased significantly, leading to deviations of analog performance from its expected values. Performance macromodeling aids in reducing synthesis time by removing the simulation overhead. In this work, we develop a novel spline-based center and range method (SCRM) for process-variation-aware performance macromodeling (VAPMAC) which works on interval-valued data. Experiments demonstrate a computational-time advantage of around 200,000x using VAPMAC-generated macromodels over SPICE Monte Carlo simulation. The results also demonstrate less than 10% loss in accuracy in computing the performance bounds using the macromodels compared to the SPICE simulations.
2007
 
B Kommineni, S Basu, R Vemuri (2007) A Spline Based Regression Technique on Interval Valued Noisy Data. In: Sixth International Conference on Machine Learning and Applications (ICMLA 2007), pp. 241-247. University of Cincinnati, Cincinnati, OH: IEEE Computer Society.
Abstract: In this paper we present a spline-based center and range method (SCRM) to perform regression on interval-valued noisy data. The method provides a fast and accurate mechanism to model and predict upper and lower limits of unknown functions in a bounded design space. This technique is superior to previously existing techniques like center-and-range linear least-squares regression (CRM). The accurate models may find wide usage in high-precision applications. The effectiveness of the proposed technique is demonstrated through experiments on datasets from various applications.
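A minimal sketch of the baseline the paper improves on, the center-and-range method (CRM): fit one ordinary least-squares model to the interval midpoints and another to the half-ranges, then recombine them to predict bounds. SCRM replaces these linear fits with splines; the toy data below is illustrative.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form, 1-D)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def fit_crm(x_intervals, y_intervals):
    """Center-and-range method: regress interval midpoints and
    half-ranges separately, then recombine to predict bounds."""
    xc = [(lo + hi) / 2 for lo, hi in x_intervals]
    xr = [(hi - lo) / 2 for lo, hi in x_intervals]
    yc = [(lo + hi) / 2 for lo, hi in y_intervals]
    yr = [(hi - lo) / 2 for lo, hi in y_intervals]
    ac, bc = fit_line(xc, yc)   # center model
    ar, br = fit_line(xr, yr)   # range model
    def predict(lo, hi):
        c = ac + bc * (lo + hi) / 2
        r = ar + br * (hi - lo) / 2
        return c - r, c + r
    return predict

# Toy interval data generated by y = 2x, so CRM recovers it exactly.
xs = [(1, 2), (2, 4), (3, 5), (4, 7)]
ys = [(2, 4), (4, 8), (6, 10), (8, 14)]
predict = fit_crm(xs, ys)
print(predict(2, 3))
```

The spline variant keeps the same center/range decomposition but lets each sub-model capture non-linear behavior, which is what gives SCRM its accuracy advantage on noisy data.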
 
S Basu, R Vemuri (2007) Process Variation and NBTI Tolerant Standard Cells to Improve Parametric Yield and Lifetime of ICs. In: IEEE Computer Society Annual Symposium on VLSI (ISVLSI '07), pp. 291-298. University of Cincinnati, Cincinnati, OH: IEEE Computer Society.
Abstract: Negative bias temperature instability (NBTI) has become a growing concern in nanometer technologies. It may reduce the lifetime of reliable operation of the PMOS transistors in a design. Process variation has started impacting nanometer ICs by reducing parametric yield, and together with NBTI it can further reduce the reliable lifetime of ICs. Conventional ASIC design methodology uses pre-characterized standard cells to optimize the design as per specifications, and standard cells occupy nearly 75% of the chip real estate in a sea-of-gates design. Therefore process-variation- and NBTI-tolerant robust standard cells may help reduce the margin of performance variation, thereby increasing the lifetime of reliable operation. The use of robust cells may further help in reducing the design-time overhead. In this work, the authors model the combined effect of process variation and NBTI on intrinsic gate delay using a reduced-dimension modeling technique. The authors use the models to optimize the standard cells in the presence of NBTI and process variations with a target lifetime of 10 years. Experimental results show that the use of optimized robust standard cells can considerably improve the tolerance of circuits in the self-timed sections of critical timing paths.
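The interaction the paper studies can be sketched with two textbook approximations: a power-law NBTI threshold-voltage shift (dVth = A * t^n) and an alpha-power-law delay model. The constants A, n, Vdd, Vth0, and alpha below are illustrative assumptions, not values from the paper.

```python
def nbti_vth_shift(t_years, a=2e-3, n=1.0 / 6.0):
    """Power-law NBTI threshold shift in volts, dVth = A * t^n with
    t in seconds; A and n are illustrative fitting constants."""
    t_sec = t_years * 365 * 24 * 3600
    return a * t_sec ** n

def delay_ratio(dvth, vdd=1.2, vth0=0.4, alpha=1.3):
    """Alpha-power-law estimate of how much a gate slows down when
    its PMOS threshold voltage rises by dvth."""
    return ((vdd - vth0) / (vdd - vth0 - dvth)) ** alpha

dv = nbti_vth_shift(10)  # shift at the paper's 10-year target lifetime
print(f"dVth after 10 years: {dv * 1000:.1f} mV, "
      f"delay x{delay_ratio(dv):.3f}")
```

An optimizer in the spirit of the paper would size the cells so that the delay at year 10 under this combined shift, plus process variation, still meets the timing specification.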
 
S Basu, P Thakore, R Vemuri (2007) Process Variation Tolerant Standard Cell Library Development using Reduced Dimension Statistical Modeling and Optimization Techniques. In: 8th International Symposium on Quality Electronic Design (ISQED '07), pp. 814-820. University of Cincinnati, Cincinnati, OH 45221: IEEE Computer Society.
Abstract: Parametric yield has a direct impact on the profit yield of designs. In sub-90nm domains, ensuring acceptable parametric yield by corner-case analysis has become inaccurate. Increasing clock requirements and process variations necessitate the use of statistical modeling and analysis techniques for performance optimization. However, the dimensionality of statistical techniques, driven by the randomness of process variations, has continued to grow, resulting in increased design complexity and run-time and degraded accuracy. Design of standard cell libraries that are tolerant to process variations is still inadequate. This continues to result in expensive re-spins, leading to significant design-time overhead and low profit yield. In this paper, the authors present a novel technique to build analytical equivalent models, using statistical techniques, for intra-gate variability of physical parameters. This reduces the dimension of the response surface method used to model the gate delay. The authors use these models to optimize the gate delay in the presence of process variations. Experimental results show the effectiveness of using the variation-tolerant standard cells, resulting in better performance tolerance in designs.
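The reduced-dimension idea can be sketched as follows: rather than giving the response surface one input per varying intra-gate parameter, fold the per-transistor Vth shifts into a single sensitivity-weighted equivalent variable and fit a one-dimensional surface over it. The sensitivities and the quadratic surface below are toy stand-ins for the paper's statistically derived models.

```python
import random

# Hypothetical delay sensitivities (ps per mV of Vth shift) for the
# four transistors of a gate; in the paper these would come from
# statistical characterization.
SENS = (3.0, 2.0, 1.5, 1.0)

def equivalent_shift(shifts_mv):
    """Fold per-transistor Vth shifts into one sensitivity-weighted
    variable, so the delay response surface needs a single input
    instead of one per transistor."""
    return sum(s * dv for s, dv in zip(SENS, shifts_mv))

def delay_rsm(z, nominal_ps=20.0):
    """One-dimensional response surface for gate delay; a toy
    quadratic stands in for a fitted model."""
    return nominal_ps + z + 0.05 * z * z

rng = random.Random(1)
shifts = [rng.gauss(0.0, 1.0) for _ in SENS]  # mV-scale random shifts
delay = delay_rsm(equivalent_shift(shifts))
print(f"estimated delay: {delay:.2f} ps")
```

Collapsing the inputs this way is what keeps the response-surface fit tractable as the number of independently varying intra-gate parameters grows.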

PhD theses

2008
Shubhankar Basu (2008) Performance Modeling and Optimization Techniques in the Presence of Random Process Variations to Improve Parametric Yield of VLSI Circuits. PhD thesis, Department of Electrical and Computer Engineering, University of Cincinnati, Cincinnati, OH 45221.
Abstract: As the semiconductor industry continues to follow Moore's law of doubling device count every 18 months, it is challenged by rising uncertainties in the manufacturing process for nanometer technologies. Manufacturing defects lead to random variation in physical parameters like dopant density, critical dimensions, and oxide thickness. These physical defects manifest themselves as variations in device process parameters like the threshold voltage and effective channel length of transistors. The randomness in process parameters affects the performance of VLSI circuits, which leads to a loss in parametric yield. Conventional design methodologies, with corner-case-based analysis techniques, fail to predict the performance of circuits reliably in the presence of random process variations. Moreover, analysis techniques that detect defects in the later stages of the design cycle result in significant cost overhead due to re-spins. In recent times, VLSI computer-aided design methodologies have shifted to statistical analysis techniques for performance measurement with specific yield targets. However, the adoption of statistical techniques in commercial design flows has been limited by the complexity of their usage and the need for generating specially characterized models. This also makes them unsuitable for repeated loops during the synthesis process. In this dissertation, we present an alternate approach to model and optimize the performance of digital and analog circuits in the presence of random process variations. Our work targets a bottom-up methodology providing incremental tolerance to circuits under the impact of random process variations. The methodologies presented can be used to generate fast-evaluating, accurate macromodels to compute the bounds of performance due to the underlying variations in device parameters.
The primary goal of our methodology is to capture the statistical aspects of variation at the lower levels of abstraction, while aiding deterministic analysis during top-level design optimization. We also attempt to build our solutions as a wrapper around a conventional design flow, without the requirement for special characterization. The modeling and optimization techniques scale across technology generations and can find practical usage during variation-tolerant synthesis of VLSI circuit performance.