
Imran Sarwar Bajwa

Department of Computer Science & IT
The Islamia University of Bahawalpur,
Pakistan
imran.sarwar@iub.edu.pk
I am an Assistant Professor of Computer Science & IT at The Islamia University of Bahawalpur, Pakistan. I completed my PhD research in the field of Automated Software Modelling at the School of Computer Science, University of Birmingham, UK, in 2012. In 2006-07 I worked at the University of Coimbra, Portugal, as a guest researcher on a project funded by the European Union and FCT, Portugal. I have been teaching and doing research at various universities in Pakistan, Portugal and the UK since 2003. I am an active member of IEEE, ACM, AAAI, ACA, EATCS, SCTA, and IWA, and the author of four books and more than 80 journal and conference papers.

Books

2011
Imran Sarwar Bajwa, S Irfan Hyder (2011)  Image Classification of Single Layered Cloud Types   Germany: LAMBERT Academic Publishing (LAP) 1: 1 isbn:978-38-44328-26-4  
Abstract: An automatic classification system is presented which discriminates between the different types of single-layered clouds using Principal Component Analysis (PCA), with enhanced accuracy and faster processing speed compared to other techniques. The system is first trained on cloud images. In the training phase, the system reads the major principal features of the different cloud images to produce an image space. In the testing phase, a new cloud image can be classified by comparing it with the specified image space using the PCA algorithm. Weather forecasting applications use various pattern recognition techniques to analyze cloud information and other meteorological parameters. Neural networks are an often-used methodology for image processing, and statistical methodologies such as FDA, RBFNN and SVM are also used for image analysis. These methodologies require more training time and have limited accuracy of about 70%. This level of accuracy often degrades the classification of clouds, and hence the accuracy of rain and other weather predictions is reduced. The PCA algorithm provides a more accurate cloud classification that yields better and more concise forecasting of rain.
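The train-then-project scheme described in the abstract can be sketched as a minimal PCA classifier. The toy 4-pixel "images", the class labels, and the 1-nearest-neighbour decision rule below are all illustrative assumptions, not the book's actual pipeline, which works on real NOAA cloud imagery.

```python
import numpy as np

# Hypothetical toy data: each row is a flattened grayscale "cloud image".
train = np.array([[1.0, 2.0, 3.0, 4.0],
                  [2.0, 1.0, 4.0, 3.0],
                  [9.0, 8.0, 7.0, 6.0],
                  [8.0, 9.0, 6.0, 7.0]])
labels = ["cumulus", "cumulus", "stratus", "stratus"]

# Training phase: build the "image space" from principal components.
mean = train.mean(axis=0)
centered = train - mean
# Right-singular vectors of the centered data = principal components.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:2]                    # keep the top-2 components
train_proj = centered @ components.T   # project training images

def classify(image):
    """Project a new image into the image space and return the label
    of the nearest training projection (1-NN in PCA space)."""
    proj = (np.asarray(image) - mean) @ components.T
    dists = np.linalg.norm(train_proj - proj, axis=1)
    return labels[int(np.argmin(dists))]

print(classify([8.5, 8.5, 6.5, 6.5]))  # prints "stratus"
```

A real system would flatten full-resolution images and keep enough components to cover most of the variance rather than a fixed two.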
M Asif Naeem, Imran Sarwar Bajwa, M Abbas Choudhary (2011)  Web Content Mining from Hidden Web   Germany: LAMBERT Academic Publishing (LAP) 1 isbn:978-38-44380-86-6  
Abstract: The World Wide Web is an enormous compilation of multi-variant data. For better knowledge management it is important to retrieve accurate and complete data. The hidden Web, also known as the invisible Web or deep Web, has given rise to a new issue in Web mining research. Most documents in the hidden Web, including pages hidden behind search forms, specialized databases, and dynamically generated Web pages, are not accessible to general Web mining applications. In this work a system is designed that has a robust ability to access these hidden web pages using web structure mining techniques for better knowledge management. Modern web pages use dynamic content generation, and user forms are used to obtain information from a particular user and store it in a database. The link structure lying in these forms cannot be accessed during conventional mining procedures. The accuracy of web page hierarchical structures can be improved by including these hidden web pages in the process of Web structure mining. The designed system is sufficiently strong to process dynamic Web pages along with static ones.

Journal articles

2012
Hina Afreen, Imran Sarwar Bajwa (2012)  A Framework for Automated Object Oriented Analysis of Natural Language Software Specifications   International Journal of Software Engineering and Its Applications 6: 2. 15-22 April  
Abstract: The currently available approaches for processing natural language (NL) software requirements specifications are semi-automatic and require user intervention. Moreover, these approaches result in less accurate and imprecise object-oriented software models. Recent research in the area attributes the less accurate analysis of software requirements to the informal nature of natural languages. On the basis of this axiom, we have identified that direct translation of a natural language to a formal language is the actual problem. In this paper, we propose that instead of translating a natural language directly to a formal language, we first transform the natural language text to a semi-formal language that is not only simple and easy to translate to a formal language but also provides higher accuracy. We have incorporated the Semantics of Business Vocabulary and Business Rules (SBVR) language as the semi-formal medium in natural language to object-oriented model translation. The presented approach automatically generates object-oriented software models from natural language software specifications using SBVR as a pivot representation.
Ashfa Umber, M Shahid Naweed, Tayyiba Bashir, Imran Sarwar Bajwa (2012)  Requirements Elicitation Methods   Advanced Materials Research 433-440: 6000-6006 January  
Abstract: Requirements elicitation is a task that helps a customer define what is required, and it must then be worked out with great care and attention to detail. This paper surveys and evaluates several methods for eliciting the requirements of computer-based systems: the categories these methods fall into and the problems each method involves. The solution of one method leads to the next, so the methods are interrelated; to avoid the problems of one method we need another, and thus a combination of different methods can be used to elicit specific requirements. An insufficient requirements engineering process is one important factor in the failure of IT projects. We elaborate a comparison of requirements development methods and evaluation criteria, and also identify common factors that affect method selection. Requirements are elicited through consultation with stakeholders, so the task certainly seems simple enough, but it is not simple; it is very hard.
Ashfa Umber, Imran Sarwar Bajwa (2012)  A Step Towards Ambiguity Less Natural Language Software Requirements Specifications   International Journal of Web Applications 4: 1. 12-21 March  
Abstract: In modern software engineering practice, the ability to specify unambiguous software requirements in a natural language (NL) in a seamless way is highly valuable and desirable. Though software requirements are typically captured in natural languages such as English, there is a very high probability that more than half of NL requirements are ambiguous. For example, Mich identified that approximately 72% of NL requirements are potentially ambiguous. A primary reason for such ambiguous NL requirements is the syntactic and semantic ambiguity of a natural language such as English. A problem with ambiguous NL requirements is that a software engineer can misinterpret the requirements and generate an erroneous and absurd software model. In this paper, we aim to address this challenge by presenting a novel approach based on a semantically controlled NL representation for software requirements. To generate a semantically controlled NL representation, we propose the use of the Semantics of Business Vocabulary and Business Rules (SBVR) standard. We solve a case study to bear out that an SBVR-based controlled representation can not only help in generating accurate and consistent software models but can also simplify the machine processing of requirements. The results show that our approach can be helpful in generating accurate and consistent software models from NL software requirements. A Java implementation of the approach is also presented as a proof of concept and is available as an Eclipse plugin.
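The core idea of an SBVR-style controlled representation can be illustrated with a tiny rewriting rule: an English requirement carrying a modal verb is recast into an SBVR-like obligation pattern. The single pattern and the `to_sbvr_rule` helper below are invented for illustration; the paper's actual pipeline is far richer.

```python
# Minimal sketch, assuming one hypothetical modal-verb pattern:
# "X must/should/shall Y" -> "It is obligatory that X Y."
def to_sbvr_rule(requirement):
    text = requirement.strip().rstrip(".")
    for modal in ("must", "should", "shall"):
        marker = f" {modal} "
        if marker in text:
            subject, action = text.split(marker, 1)
            return f"It is obligatory that {subject} {action}."
    return None  # no modal verb found: not handled by this toy rule

print(to_sbvr_rule("A user must enter a valid password."))
# It is obligatory that A user enter a valid password.
```

The fixed "It is obligatory that ..." phrasing is what makes the representation machine-friendly: a later stage can map it to a formal constraint without re-parsing free English.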
Imran Sarwar Bajwa, M Abbas Choudhary (2012)  Satellite Image Classification for Weather Monitoring   Procedia Engineering 17: 1.  
Abstract: In satellite imagery, low- and high-pressure zones are very critical for weather prediction. An automatic recognition system is introduced here which distinguishes the low- and high-pressure zones in a satellite image to predict precipitation. The Principal Component Analysis (PCA) algorithm, typically an image processing technique, has been used for the identification of pressure zones. The system works in two phases. In the first phase, NOAA satellite images are used to train the system: it identifies and extracts key special features of the input images to produce an image space. The testing phase is then employed to compare the low- and high-pressure zones extracted from a satellite image. Identification of low- and high-pressure zones in satellite images can help in better prediction of forthcoming weather conditions, i.e., low-pressure zones cause cloudy and overcast weather while high-pressure zones are predictors of dry weather.
2011
Imran Sarwar Bajwa (2011)  Middleware Design Framework for Mobile Computing   International Journal of Emerging Sciences 1: 1. 38-44 April  
Abstract: Mobile computing is one of the growing fields in the area of wireless networking. The recent standardization efforts accomplished in Web services, with their XML-based formats for registration/discovery, service description, and service access (respectively UDDI, WSDL, and SOAP), certainly represent an interesting first step towards open service composition, which MA supports for mobile computing are expected to integrate within their frameworks soon. A middleware that can work even if the network parameters change can be a better solution for successful mobile computing. A middleware is proposed here for handling the existing problems in distributed environments. Middleware is about the integration and interoperability of applications and services running on heterogeneous computing and communication devices. The services it provides - including identification, authentication, authorization, soft-switching, certification and security - are used in a vast range of global appliances and systems, from smart cards and wireless devices to mobile services and e-Commerce.
Imran Sarwar Bajwa (2011)  A Framework for Ontology Creation and Management for Semantic Web   International Journal of Innovation Management and Technology 2: 2. 261-265 April  
Abstract: An ontology is a model of the reality of the world, and the concepts in the ontology must reflect this reality. Ontologies are the building blocks of Semantic Web based systems. Creating ontologies is not an easy task, and obviously there is no unique correct ontology for any domain. There are many other important issues related to ontology engineering, some of which are ontology integration, ontology mapping, ontology translation, ontology reuse and ontology consistency checking. Due to the unavailability of any standard for ontology building, ontologies on the same subject differ, and different ontology tools use different ontology languages. For these reasons, interoperability between ontologies is very low. Current ontology tools concentrate mostly on the create, edit, and inference functions; most do not support the merging of heterogeneous domain ontologies. Moreover, duplicate information across documents and redundant annotations are major challenges of automatic ontology creation, as automatically populating an ontology from diverse and distributed web resources poses significant challenges.
Imran Sarwar Bajwa, Ahsan Ali Chaudhri, M Asif Naeem (2011)  Processing Large Data Sets using a Cluster Computing Framework   Australian Journal of Basic and Applied Science 5: 6. 1614-1618 June  
Abstract: Growth in the scientific disciplines has made large data collections important community resources. The volume of interesting data is already measured in terabytes and will soon total petabytes. This research proposal addresses the issue of processing massive amounts of satellite data. A single LEO satellite sends around 2 GB of data in 24 hours. To process this huge amount of data, ordinary digital computers face constraints of processing time, resources and cost. A solution is needed that can provide a quick way of processing at low cost. Cluster computing is a network-based distributed environment that can provide fast processing support for huge jobs. A middleware is typically required in cluster computing, and in this proposal a middleware is proposed for handling the existing processing problems in distributed environments. In a typical heterogeneous computation, a middleware can be employed to provide integration and interoperability in the underlying applications and services.
2010
Imran Sarwar Bajwa, Amjad Farooq, Amna Khan (2010)  An Effective e-Learning System for Teaching the Fundamentals of Computing and Programming   International Journal of Multidisciplinary Sciences and Engineering 1: 1. 10-14 September  
Abstract: The great enhancement in available technology over recent years has had a tremendous effect on the quality of education. For many years, students of first-year chemical engineering courses have been offered a course in computing and programming to enhance their logical thinking capabilities and improve their problem-solving skills, along with hands-on experience of current computer technology. The problem arises from the fact that most of the students belong to rural areas and have little or no computer-related knowledge. This paper is an effort to propose an effective e-learning system for teaching programming to these students so that their computational and programming skills, along with the basic concepts, are improved.
Imran Sarwar Bajwa (2010)  Markov Logics Based Automated Business Requirements Analysis   International Journal of Computer and Electrical Engineering 2: 3. 481-485 June  
Abstract: Automated software engineering has been an area of interest for NLP scientists for many decades. Various scientists have investigated the applicability of natural language based interfacing for business/software requirements analysis. Some use a rule-based approach, while others have used neural networks, case-based reasoning, etc. These techniques provide results of up to 70-80%. The key objective of this research paper is to investigate the use of the recently introduced approach of Markov Logic by Pedro Domingos and to compare the output of this approach with other approaches employed in the recent past. Results of analyzing natural language text using Markov Logic are presented in this paper.
Imran Sarwar Bajwa (2010)  Context Based Meaning Extraction by Means of Markov Logic   International Journal of Computer Theory and Engineering 2: 1. 35-38 February  
Abstract: Understanding the meanings and semantics of speech or natural language is a complicated problem. This problem becomes more vital and more complex when meanings have to be extracted with respect to context. This research area has been a focus of attention for the last few decades, and many different techniques have been used to address the problem. An automated system is required that is able to analyze and understand a few paragraphs of English text. In this research, Markov Logic has been incorporated to analyze and understand natural language script given by the user. The designed system formulates standard speech language rules with certain weights. These weights for each rule ultimately help in deciding the particular meaning of a phrase or sentence. The designed system provides an easy and consistent way to figure out speech language context and produce the respective meanings of the text.
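The weighted-rule idea from the abstract can be sketched in a few lines: each rule carries a weight, and the candidate meaning whose satisfied rules sum to the highest total weight wins. The example rules, words, and weights below are invented for illustration; the paper's actual rule set and Markov Logic inference are much richer.

```python
# Toy weighted-rule disambiguator in the spirit of the approach above.
# (word, context_word, meaning, weight) tuples are hypothetical.
RULES = [
    ("bank", "river", "riverbank",   1.5),
    ("bank", "money", "institution", 2.0),
    ("bank", "loan",  "institution", 1.8),
]

def disambiguate(word, context_words):
    """Score each candidate meaning by the summed weight of the
    rules it satisfies; return the highest-scoring meaning."""
    scores = {}
    for w, ctx, meaning, weight in RULES:
        if w == word and ctx in context_words:
            scores[meaning] = scores.get(meaning, 0.0) + weight
    return max(scores, key=scores.get) if scores else None

print(disambiguate("bank", {"loan", "money"}))  # prints "institution"
```

Full Markov Logic additionally learns the weights from data and performs probabilistic inference over first-order formulas; this sketch only shows the "weighted rules decide the meaning" intuition.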
Imran Sarwar Bajwa (2010)  Virtual Telemedicine Using Natural Language Processing   International Journal of Information Technology and Web Engineering 5: 1. 43-55 January  
Abstract: Conventional telemedicine has limitations due to time constraints in the response of a medical specialist. One major reason is that telemedicine-based medical facilities are subject to the availability of a medical expert and of telecommunication facilities; moreover, communication over telecommunication links is only possible at fixed, appointed times. Typically, the field of telemedicine spans both the medical and telecommunication areas to provide medical facilities over long distances, especially in remote areas. In this article, the authors present a solution, 'virtual telemedicine', to cope with the problem of long time constraints in conventional telemedicine. Virtual telemedicine is the use of telemedicine combined with the methods of artificial intelligence.
Imran Sarwar Bajwa, Natasha Nigar, M J Arshad (2010)  An Autonomous Robot Framework for Path Finding and Obstacle Evasion   International Journal of Computer Science and Telecommunications 1: 1. 1-6 November  
Abstract: The modern growth of the computer and its related hardware is a consequence of the invention of the transistor. The advent of the transistor principally revolutionized hardware engineering by reducing hardware size and increasing efficiency. Now we are swarmed with an influx of sophisticated computer gadgets and communication devices combining ingenious ideas and state-of-the-art designs. When the history of the world is written, our contemporary age will surely be called the age of science and technology. The marvels of science and technology have not only bewildered human minds but also brought convenience and quality to human life. Our project is a continuation of this tradition. We have endeavored to design an autonomous path-tracking vehicle. It has limitless possibilities of usage and could certainly become a future workhorse; for example, it can be used to detect stolen vehicles. The autonomous robot for path finding and obstacle evasion is a vehicle which follows a path in two different ways: i) Line following - the vehicle follows a reflecting line drawn on the floor, capturing the line position with IR sensors mounted at the front end of the robot; ii) Obstacle handling - when an obstacle appears on the line being followed, it is detected through a sensor.
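The two behaviours described above (line following from two IR sensors, plus stopping for obstacles) reduce to a small decision table. The sensor names, command strings, and priority order below are illustrative assumptions; the real robot drives motors from hardware IR readings.

```python
# Toy control step for the line-follower/obstacle-evasion behaviour.
def steer(left_on_line, right_on_line, obstacle):
    """Return a drive command from two IR line sensors and an
    obstacle sensor (all booleans). Obstacle handling has priority."""
    if obstacle:
        return "stop"            # obstacle detected on the line
    if left_on_line and right_on_line:
        return "forward"         # centered over the line
    if left_on_line:
        return "turn_left"       # line drifting left: steer back
    if right_on_line:
        return "turn_right"      # line drifting right: steer back
    return "search"              # line lost: look for it again

print(steer(True, True, False))   # prints "forward"
print(steer(False, True, False))  # prints "turn_right"
print(steer(True, True, True))    # prints "stop"
```

On real hardware this function would sit inside a fast polling loop, with the returned command mapped to left/right motor speeds.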
2009
Imran Sarwar Bajwa, R Kazmi, S Mumtaz, M Abbas Choudhary, M Shahid Naweed (2009)  SOA and BPM Partnership: A paradigm for Dynamic and Flexible Process and I.T. Management   International Journal of Humanities and Social Sciences 4: 7. 267-273 September  
Abstract: Business Process Management (BPM) helps in optimizing the business processes inside an enterprise, but the BPM architecture does not provide any help for extending the enterprise. Modern business environments and rapidly changing technologies demand brisk changes in business processes. Service Oriented Architecture (SOA) can help in enabling the success of enterprise-wide BPM. SOA supports agility in software development, which is directly related to achieving loose coupling of interacting software agents; agility is a premium concern of current software design architectures. Together, BPM and SOA provide a perfect combination for enterprise computing: SOA provides the capabilities for services to be combined to support and create an agile, flexible enterprise. But there are still many questions to answer: is BPM better, or SOA? And what is the future track of BPM and SOA? This paper tries to answer some of these important questions.
Imran Sarwar Bajwa, Shahzad Mumtaz, Ali Samad (2009)  Object Oriented Software Modeling using NLP Based Knowledge Extraction   European Journal of Scientific Research 35: 1. 22-33 August  
Abstract: This paper presents a natural language processing based automated system for NL text to OO modeling of user requirements and for generating code in multiple languages. A new rule-based model is presented for analyzing natural language (NL) and extracting the relevant and required information from the software requirement notes given by the user. The user writes the requirements in simple English in a few paragraphs, and the designed system incorporates NLP methods to analyze the given script. First the NL text is semantically analyzed to extract classes, objects and their respective attributes, methods and associations. Then UML diagrams are generated on the basis of the previously extracted information. The designed system also automatically provides the respective code for the generated diagrams, offering a quick and reliable way to generate UML diagrams that saves the time and budget of both the user and the system analyst.
2008
Imran Sarwar Bajwa, Shahzad Mumtaz, M Shahid Naweed (2008)  Database Interfacing using Natural Language Processing   European Journal of Scientific Research 20: 4. 844-851 July  
Abstract: Writing technically correct SQL queries is a complex and skill-requiring task, especially for a novice user. The situation becomes more complex when a low-skilled person has to use a database management system for a specific business purpose: s/he has to write some queries on his or her own and perform various tasks, a scenario that requires expertise and skill in understanding and writing accurate and functional queries. The task of the novice user can be simplified by providing an easy interface that is well known to that user. To resolve such issues, automated software is needed that facilitates both users and software engineers. The user writes the requirements in simple English in a few statements, and the designed system has the ability to analyze the given script. After composite analysis and mining of the associated information, the designed system generates the intended SQL queries, which can be run directly. The paper describes a system that can create SQL queries automatically, providing a quick and reliable way to generate SQL queries that saves the time and budget of both the user and the system analyst.
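The English-statement-to-SQL idea can be sketched with a single pattern-based rule. The one pattern handled here ("show all &lt;table&gt; where &lt;column&gt; is &lt;value&gt;") and the table/column names are invented for illustration; the paper's system analyzes much richer English.

```python
import re

# Minimal pattern-based sketch of NL-to-SQL translation,
# assuming one hypothetical sentence template.
PATTERN = re.compile(r"show all (\w+) where (\w+) is (\w+)",
                     re.IGNORECASE)

def to_sql(text):
    """Translate one English sentence matching the template into
    a runnable SELECT statement; return None otherwise."""
    m = PATTERN.match(text.strip())
    if not m:
        return None
    table, column, value = m.groups()
    return f"SELECT * FROM {table} WHERE {column} = '{value}'"

print(to_sql("show all students where city is Bahawalpur"))
# SELECT * FROM students WHERE city = 'Bahawalpur'
```

A production system would also need schema awareness (so "students" is checked against real table names) and proper value escaping rather than naive string interpolation.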
Imran Sarwar Bajwa, M Shahid Naweed, M Nadim Asif, S Irfan Hyder (2008)  Feature Based Image Classification by using Principal Component Analysis   Journal of Graphics, Vision and Image Processing 9: 2. 11-17 March  
Abstract: Classification of the different types of cloud images is the primary step in forecasting precipitation and other weather constituents. A PCA-based classification system is presented in this paper to classify the different types of single-layered and multi-layered clouds. Principal Component Analysis (PCA) provides enhanced accuracy in feature-based image identification and classification compared to other techniques. PCA is a feature-based classification technique characteristically used for image recognition: it is based on the principal features of an image, and these features discretely represent the image. The approach used in this research uses the principal features of an image to identify different cloud image types with better accuracy, and a classifier system has been designed to exhibit this enhancement. The designed system reads the features of gray-level images to create an image space, which is then used for the classification of images. In the testing phase, a new cloud image is classified by comparing it with the specified image space using the PCA algorithm.
2007
Imran Sarwar Bajwa, M Imran Siddique, M Abbas Choudhary (2007)  Web Layout Mining (WLM): A Paradigm for Intelligent Web Layout Design   Egyptian Computer Science Journal 29: 2. 54-63 May  
Abstract: The problem in designing modern website projects is to produce content according to the latest trends and styles. Common website editors just help to draw the intended layouts, but the problem is to design an accurate web layout according to demand and the latest trends and styles. This approach is useful when the user already has a specific layout in mind and is familiar with web page layout principles and the kinds of layouts that are possible. It is intrinsically difficult, particularly for those who have limited artistic and creative abilities, to design from scratch a good layout which is acceptable in every respect. An automated system is required that has the ability to mine the layouts of the desired type of websites. The designed system for "Web Layout Mining (WLM)" helps to mine the most popular web layouts from the internet and design a web layout that is near acceptable and has all the marks and features of modern requirements. The designed system is actually based on a rule-based algorithm which helps the user to search out some samples related to his website category; afterwards the user chooses a desired web layout and designs his own, with proper implications and variations according to his own requirements.
2006
Imran Sarwar Bajwa, M Abbas Choudhary (2006)  A Rule Based System for Speech Language Context Understanding   International Journal of Donghua University (English Edition) 23: 6. 39-42 June  
Abstract: Speech and natural language content are major tools of communication. This research paper presents a natural language processing based automated system for understanding speech language text. A new rule-based model is presented for analyzing natural languages and extracting the relative meanings from the given text. The user writes natural language text in simple English in a few paragraphs, and the designed system has a sound ability to analyze the given script. After composite analysis and extraction of the associated information, the designed system assigns particular meanings to an assortment of speech language text on the basis of its context. The designed system uses standard speech language rules that are clearly defined for all speech languages such as English, Urdu, Chinese, Arabic, French, etc. The designed system provides a quick and reliable way to comprehend speech language context and generate the respective meanings.
2005
Imran Sarwar Bajwa, S Irfan Hyder (2005)  PCA Based Classification of Single Layered Cloud Types   Journal of Market Forces 1: 2. 3-13 June  
Abstract: The paper presents an automatic classification system which discriminates between the different types of single-layered clouds using Principal Component Analysis (PCA) with enhanced accuracy compared to other techniques. PCA is an image classification technique typically used for face recognition. PCA can be used to identify image features called principal components; a principal component is a peculiar feature of an image. The approach described in this paper uses this PCA capability to enhance the accuracy of cloud image analysis. To demonstrate this enhancement, a software classifier system has been developed that incorporates the PCA capability for better discrimination of cloud images. The system is first trained on cloud images: in the training phase, the system reads the major principal features of the different cloud images to produce an image space. In the testing phase, a new cloud image can be classified by comparing it with the specified image space using the PCA algorithm.

Book chapters

2012
Imran Sarwar Bajwa, M Abbas Choudhary (2012)  From Natural Language Software Specifications to UML Class Models   In: Lecture Notes in Business Information Processing (NLBIP-102) Edited by:R. Zhang et al.. 224-237 Springer-Verlag Berlin Heidelberg  
Abstract: Software specifications are typically captured in natural languages, and software analysts then manually analyze them and produce software models such as class models. Various approaches, frameworks and tools have been presented for the automatic translation of software models, such as CM-Builder, Re-Builder, NL-OOML, GOOAL, etc. However, experiments with these tools show that they do not provide high accuracy in translation. The major reason for the low accuracy reported in the literature is the ambiguous and informal nature of natural languages. In this article, we aim to address this issue and present a better approach for processing natural languages and producing more accurate UML software models. The presented approach is based on Semantics of Business Vocabulary and Business Rules (SBVR), a standard recently adopted by the OMG. In our approach, the natural language software specifications are first mapped to an SBVR rules representation; SBVR rules are easy to translate to other formal representations such as OCL and UML, as SBVR is based on higher-order logic. A case study solved with our tool NL2UMLviaSBVR is also presented, and a comparative analysis of our research with other available tools shows that the use of SBVR in NL-to-UML translation helps to improve accuracy.
Shazia Kareem, Imran Sarwar Bajwa (2012)  Virtual Telemedicine and Virtual Telehealth: A Natural Language Based Implementation to Address Time Constraint Problem   Edited by:Hannah Abelbeck. 183-195 IGI Global: Models for Capitalizing on Web Engineering Advancements: Trends and Discoveries isbn:9781466600232  
Abstract: Telemedicine is a modern technology that is employed to provide low-cost but high-standard medical facilities to the people of remote areas. The store-and-forward method of telemedicine particularly suits progressive countries like Pakistan, not only because it is easy to set up but also due to its cheap operating cost. However, the high response time of store-and-forward telemedicine becomes a critical factor in emergency cases, where each minute has a price. The response time factor can be overcome by using a virtual telemedicine approach, in which a Clinical Decision Support System (CDSS) is deployed at the rural station. The CDSS is intelligent enough to diagnose a patient's disease and prescribe proper medication. In case the CDSS cannot answer a query, it immediately sends an e-mail to a medical expert (doctor), and when the response is received the CDSS knowledge base is updated for future queries. In this research paper, we not only report an NL-based CDSS that can answer NL queries but also present a complete architecture of a virtual telemedicine setup.

Conference papers

2012
Kashif Hameed, Imran Sarwar Bajwa, Muhammad Asif Naeem (2012)  A Novel Approach for Automatic Generation of UML Class Diagrams from XMI   In: International Multi-Topic conference (IMTIC) 2012 164-175 Springer  
Abstract: XMI (XML Metadata Interchange) is used to exchange metadata information of UML (Unified Modeling Language) models using an XML (Extensible Markup Language) representation. All major CASE tools, e.g. ArgoUML, Rational Rose, Enterprise Architect, MS Visio, Altova, Smart Draw, etc., can export and import XMI. However, the current implementation of XMI in CASE tools does not fulfill the goal of model interchange, as the tools can import XMI and extract metadata information but cannot generate UML models such as UML class diagrams. A primary reason for this inability is that XMI only provides information about what elements are in a UML class model, not about how these elements (such as classes, associations, etc.) are represented and laid out in diagrams. Without this facility, the real power of XMI remains unexplored. In this paper, we present a novel approach based on a Binary Space Partitioning (BSP) tree data structure to re-generate UML diagrams from XMI. A VB.NET implementation is also presented as a proof of concept.
Notes:
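The BSP-tree layout technique this abstract summarizes can be sketched roughly as follows. The node structure and the split rule below are illustrative assumptions, not the authors' exact data structure: each node owns a rectangular free region, and placing a box splits the remainder into "right" and "below" child regions so no two boxes can collide.

```python
# Illustrative sketch: placing UML class boxes on a 2-D plane with a
# binary-space-partitioning scheme so that no two boxes overlap.
# Hypothetical reconstruction of the general technique, not the paper's code.

class BSPNode:
    """A rectangular region of free canvas space."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.used = False
        self.right = None   # free space to the right of a placed box
        self.below = None   # free space below a placed box

    def insert(self, w, h):
        """Try to place a w x h box in this subtree; return (x, y) or None."""
        if self.used:
            return self.right.insert(w, h) or self.below.insert(w, h)
        if w > self.w or h > self.h:
            return None  # box does not fit in this free region
        # Place the box at the region's origin and split the remainder.
        self.used = True
        self.right = BSPNode(self.x + w, self.y, self.w - w, h)
        self.below = BSPNode(self.x, self.y + h, self.w, self.h - h)
        return (self.x, self.y)

root = BSPNode(0, 0, 800, 600)
positions = [root.insert(120, 80) for _ in range(3)]  # three class boxes
```

With an 800x600 canvas the three 120x80 boxes land side by side at distinct, non-overlapping origins; associations would then be routed between the placed rectangles.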
Imran Sarwar Bajwa, Mark Lee, Behzad Bordbar, Ahsan Ali (2012)  Addressing Semantic Ambiguities in English Constraints   In: The 25th International FLAIRS Conference, 262-267 Florida, USA: AAAI  
Abstract: In the NL2OCL project, we aim to translate English specifications of constraints to formal constraints such as OCL (Object Constraint Language). In English-to-OCL translation, our contribution is a semantic analyzer that uses the output of the Stanford parser for shallow and deep semantic parsing. Our analysis of the output of shallow semantic parsing showed that semantic roles were mis-identified for a few English constraints due to semantic ambiguity. Similarly, in deep semantic parsing, it is difficult to resolve the scope of quantifier operators due to scope ambiguity, another sub-type of semantic ambiguity. In this paper, we highlight the identified cases of semantic ambiguity in English constraints and present a novel approach to resolve them automatically. The presented approach is evaluated to show that, by addressing the identified cases of semantic ambiguity, we can generate more accurate and complete formal (OCL) specifications.
Notes:
Imran Sarwar Bajwa, Mark Lee, Behzad Bordbar (2012)  Semantic Analysis of Software Constraints   In: The 25th International FLAIRS Conference 8-13 Florida, USA:  
Abstract: In this paper, we present NL2OCL, a novel approach to translate English specifications of constraints to formal constraints such as OCL (Object Constraint Language). In this approach, input English constraints are syntactically and semantically analyzed to generate an SBVR (Semantics of Business Vocabulary and Rules) based logical representation that is finally mapped to OCL. During the syntactic and semantic analysis we have also addressed various syntactic and semantic ambiguities, which makes the presented approach robust. The approach is implemented in Java as a proof of concept. A case study has also been solved using our tool to evaluate the accuracy of the approach, and the results of the evaluation are compared to a pattern-based approach to highlight the significance of the presented approach.
Notes:
Muhammad Asif Naeem, Gillian Dobbie, Gerald Weber, Imran Sarwar Bajwa (2012)  Efficient Usage of Memory Resources in Near-Real-Time Data Warehousing   In: International Multi-Topic conference (IMTIC) 2012 326-337 Springer  
Abstract: In the context of near-real-time data warehousing, the updates generated at the data-source level need to be stored in the warehouse as soon as they occur. Before loading these updates into the warehouse they need to be transformed, often using a join operator between the stream of updates and disk-based master data. In this context a stream-based algorithm called X-HYBRIDJOIN (Extended Hybrid Join) was proposed earlier, with a favourable asymptotic runtime behaviour. However, its absolute performance was not as good as hoped for. In this paper we present results showing that, by properly tuning the algorithm, the resulting "Tuned X-HYBRIDJOIN" performs significantly better than the previous X-HYBRIDJOIN, and better than other applicable join operators found in the literature. We present the tuning approach, based on measurement techniques and a revised cost model. To evaluate the algorithm's performance we conduct an experimental study that shows that the Tuned X-HYBRIDJOIN exhibits the desired performance characteristics.
Notes:
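The general stream-to-disk join pattern behind this family of algorithms can be sketched in a few lines. This is a much-simplified illustration only, not the tuned X-HYBRIDJOIN algorithm: stream tuples are buffered, and one pass over the disk-based master data (here simulated as in-memory partitions) serves a whole batch, amortising the expensive disk access.

```python
# Simplified stream-disk join sketch: buffer a batch of stream tuples, then
# probe each simulated "disk" partition once for the whole batch.
# Remainder flushing for a final partial batch is omitted for brevity.

def stream_disk_join(stream, master_partitions, batch_size=4):
    """Join stream tuples (key, payload) with master rows {key: row}."""
    output, buffer = [], []
    for tup in stream:
        buffer.append(tup)
        if len(buffer) < batch_size:
            continue
        # One pass over the disk partitions serves the whole batch.
        for partition in master_partitions:          # simulated disk reads
            for key, payload in buffer:
                if key in partition:
                    output.append((key, payload, partition[key]))
        buffer.clear()
    return output

master = [{1: "a", 2: "b"}, {3: "c", 4: "d"}]        # two "disk" partitions
stream = [(1, "u1"), (3, "u2"), (9, "u3"), (2, "u4")]
result = stream_disk_join(stream, master, batch_size=4)
```

Key 9 has no master row and produces no output; the tuning question the paper studies is essentially how to size the batch and partition reads for a given memory budget.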
Muhammad Asif Naeem, Gillian Dobbie, Gerald Weber, Imran Sarwar Bajwa (2012)  A Parametric Analysis of Stream Based Joins   In: International Multi-Topic conference (IMTIC) 2012 314-325 Springer  
Abstract: Online stream processing is an emerging research area in computer science. Common examples where online stream processing is important are network traffic monitoring, web log analysis and real-time data integration. One kind of stream processing relates the information in one data stream to another data stream or to disk-based data; a stream-based join is required to perform such operations. A survey of the literature shows a number of join operators that can process streams in an online fashion, but each approach has advantages and disadvantages. In this paper we examine a number of well-known join operators by grouping them into two categories. In the first category we discuss operators that take all their inputs in the form of streams, while in the second category we consider operators in which one input resides on disk. At the end of the paper we summarise our comparisons for each category on the basis of some key parameters. We believe that this exercise will contribute to further exploration of this area.
Notes: Stream-based joins, Data transformation, Performance analysis
Kashif Hameed, Imran Sarwar Bajwa (2012)  Generating Class Models using Binary Space Partition Algorithm   In: 11th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2012) 1-13 Shanghai, China: Springer-Verlag Berlin Heidelberg  
Abstract: In this paper, we address the challenging task of automatic generation of UML class models. In conventional CASE tools, the export facility does not export the graphical information that explains the way UML class elements (such as classes, associations, etc.) are represented and laid out in diagrams. We address this problem by presenting a novel approach for automatic generation of UML class diagrams using the Binary Space Partitioning (BSP) tree data structure. A BSP tree captures the spatial layout of, and spatial relations among, the objects in a UML class model drawn on a 2-D plane. Once the information of a UML model is captured in a BSP tree, the same diagram can be re-generated by efficient partitioning of space (i.e. regions) without any collision. After drawing the UML classes, the associations, aggregations and generalisations are also drawn between the classes. The presented approach is implemented in VB.NET as a proof of concept. The contribution not only assists in diagram interchange but also improves software modelling.
Notes:
Imran Sarwar Bajwa, Mark Lee, Behzad Bordbar (2012)  Resolving Syntactic Ambiguities in Natural Language Specification of Constraints   In: 13th CICLing 2012, Part I, LNCS 7181, Edited by: A. Gelbukh. 178-187 Delhi, India: Springer, Heidelberg (2012)  
Abstract: In the NL2OCL project, we aim to translate English specifications of software constraints to formal constraints such as OCL (Object Constraint Language). In the used approach, the Stanford POS tagger and the Stanford Parser are employed for syntactic analysis of the English specification, and the output of the syntactic analysis is given to our semantic analyzer for detailed semantic analysis. However, in a few cases, the Stanford POS tagger and parser are not able to handle particular syntactic ambiguities in English specifications of software constraints. In this paper, we highlight the identified cases of syntactic ambiguity and present a novel technique to resolve them automatically. By addressing these cases, we can generate more accurate and complete formal (OCL) specifications.
Notes:
2011
Imran Sarwar Bajwa, M Asif Naeem (2011)  On Specifying Requirements using a Semantically Controlled Representation   In: 16th International Conference on Applications of Natural Languages to Information Systems (NLDB 2011) 217-220 Alicante, Spain: Springer Verlag  
Abstract: Requirements are typically specified in a natural language (NL) such as English and then analyzed by analysts and developers to generate a formal software design/model. However, English is ambiguous, and requirements specified in English can result in erroneous and absurd software designs. We propose a semantically controlled representation based on SBVR for specifying requirements. The SBVR-based controlled representation can not only result in accurate and consistent software models but is also machine-processable, because SBVR has a pure mathematical foundation. We also introduce a Java-based implementation of the presented approach as a proof of concept.
Notes:
Hina Afreen, Imran Sarwar Bajwa (2011)  Generating UML Class Models from SBVR Software Requirements Specifications   In: 23rd Benelux Conference on Artificial Intelligence (BNAIC 2011) 23-32 Gent, Belgium:  
Abstract: SBVR is a recent standard, introduced by the OMG, that can be used to capture software requirements in a natural language (NL) such as English. In this paper, we present a novel approach that can translate SBVR specifications of software requirements into UML class models. We generate UML class models from SBVR specifications rather than NL specifications because NL-to-UML translation exhibits lower accuracy due to the informal nature of natural languages. SBVR specifications can be quite helpful, as SBVR is not only based on higher-order logic and easy to machine-process but also easy for human beings to understand. In the presented approach, the user inputs the SBVR specification of software requirements; the input is then syntactically and semantically analyzed to extract OO information, which is finally mapped to a class model. The approach is implemented in a prototype tool, SBVR2UML, that is an Eclipse plugin and a proof of concept. A case study has also been solved to show that the use of SBVR in automated generation of class models provides better accuracy and consistency compared with other available approaches.
Notes:
Hina Afreen, Imran Sarwar Bajwa (2011)  SBVR2UML: A Challenging Transformation   In: 9th International Conference on Frontiers of Information Technology (FIT) 33-38 Islamabad, Pakistan: IEEE Press  
Abstract: UML is a de-facto standard used for generating software models, and it supports visualization of software artifacts. To generate a UML diagram, a software engineer has to collect software requirements in a natural language (such as English) or a semi-formal language (such as SBVR), manually analyze the requirements and then manually generate the class diagrams in an available CASE tool. However, automatically transforming SBVR software requirements to UML can significantly reduce the burden on a system analyst and can improve the quality and robustness of the software modeling phase. The paper demonstrates the challenging aspects of model transformation from SBVR to UML. The presented approach takes as input the software requirements specified in SBVR syntax, parses the input specification, extracts the UML ingredients such as classes, methods, attributes and associations, and finally generates a visual representation of the extracted information. The approach is fully automated and is explained via an example.
Notes:
Imran Sarwar Bajwa, Behzad Bordbar, Mark G Lee (2011)  SBVR vs OCL: A Comparative Analysis of Standards   In: 14th IEEE International Multitopic Conference (INMIC 2011) 261-266 Karachi, Pakistan: IEEE Press  
Abstract: In software modelling, designers have to produce UML visual models with software constraints. Similarly, in business modelling, designers have to model business processes using business constraints (business rules). Constraints are the key components in the skeleton of business or software models. A designer has to write constraints to semantically complement business models or UML models, finally implementing the constraints in business processes or source code. Business constraints/rules can be written using SBVR (Semantics of Business Vocabulary and Rules), while OCL (Object Constraint Language) is the well-known medium for writing software constraints. SBVR and OCL are two significant standards from the OMG. The standards are principally different, as SBVR is typically used in business domains and OCL is employed to complement software models. However, we have identified a few similarities between the standards that are interesting to study. In this paper, we perform a comparative analysis of both standards, as we are looking for a mechanism for automatic transformation of SBVR to OCL. The major emphasis of the study is to highlight principal features of SBVR and OCL, such as similarities, differences and the key parameters on which the two standards can work together.
Notes:
Shazia Kareem, Imran Sarwar Bajwa (2011)  A Virtual Telehealth Framework: Applications and Technical Considerations   In: IEEE International Conference on Emerging Technologies 2011 (ICET 2011) NUST Pakistan: IEEE Press  
Abstract: Over the last two decades, telehealth has emerged as an expansion of telemedicine. A telehealth system provides both curative and preventive care. In telehealth, a patient's data is sent by email to a physician or a medical expert for diagnosis and medical prescription. A telehealth system can use either the real-time method or the store-and-forward method. However, from the perspective of developing countries like Pakistan, real-time telehealth is not feasible due to its high cost, whereas store-and-forward telehealth can be a cost-effective solution. A major problem identified with the store-and-forward method is its long response time, which can stretch to 48 hours in certain cases. The situation may become worse if a patient is in a serious condition and requires immediate medication. To address this issue, a concept of virtual telehealth is proposed in this paper. We have incorporated a clinical decision support system to answer patients' queries locally. The presented solution can make telehealth more financially feasible and technically implementable in developing countries like Pakistan.
Notes:
Imran Sarwar Bajwa, Rubata Riasat (2011)  A New Perfect Hashing based Approach for Secure Stegnography   In: IEEE Sixth International Conference on Digital Information Management 102-107 Melbourne, Australia: IEEE Press  
Abstract: Image steganography is an emerging field of research for secure data hiding for data transmission over the internet, copyright protection, and ownership identification. A couple of techniques have been proposed for colour-image steganography. However, colour images are more costly to transmit over the internet due to their size. In this paper, we propose a new perfect-hashing-based approach for steganography in grey-scale images. The proposed approach is more efficient and effective, providing a more secure way of data transmission at higher speed. The presented approach is implemented in a prototype tool coded in VB.NET. The approach is effective in that multiple file formats such as bmp, gif, jpeg, and tiff are also supported. A set of sample images was processed with the tool, and the results of the initial experiments indicate the potential of the presented approach not only in terms of secure steganography but also in terms of fast data transmission over the internet.
Notes:
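The core idea of key-driven hiding in a grey-scale image can be sketched as below. The paper's perfect hash function is not specified in the abstract, so a keyed pseudo-random permutation stands in for it here as an assumption: it maps message-bit indices to distinct pixel positions, scattering the bits instead of storing them sequentially.

```python
import hashlib
import random

# Illustrative sketch of key-driven LSB steganography in a grey-scale image.
# The keyed permutation below is a stand-in for the paper's perfect hash
# function; both embedder and extractor derive the same pixel positions
# deterministically from the shared key.

def _positions(key, n_pixels, n_bits):
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest(), "big")
    rng = random.Random(seed)
    return rng.sample(range(n_pixels), n_bits)   # distinct pixel positions

def embed(pixels, message_bits, key):
    out = list(pixels)
    for pos, bit in zip(_positions(key, len(pixels), len(message_bits)),
                        message_bits):
        out[pos] = (out[pos] & ~1) | bit         # overwrite least significant bit
    return out

def extract(pixels, n_bits, key):
    return [pixels[pos] & 1
            for pos in _positions(key, len(pixels), n_bits)]

cover = [120, 37, 255, 0, 64, 200, 13, 99]       # toy 8-pixel grey-scale image
bits = [1, 0, 1, 1]
stego = embed(cover, bits, key="secret")
```

Because only least significant bits change, each pixel differs from the cover by at most 1, which is what keeps the embedding visually imperceptible.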
Imran Sarwar Bajwa, Mark G Lee (2011)  Transformation Rules for Translating Business Rules to OCL Constraints   In: 7th European Conference on Modelling Foundations and Applications (ECMFA 2011) 132-143 University of Birmingham, Birmingham, UK: Springer Verlag  
Abstract: In the design of component-based applications, designers have to produce visual models, such as Unified Modeling Language (UML) models, and describe the software component interfaces. Business rules and constraints are key components in the skeletons of software components. The Semantics of Business Vocabulary and Rules (SBVR) language is typically used to express constraints in natural language, and a software engineer then manually maps SBVR business rules to other formal languages such as UML or Object Constraint Language (OCL) expressions. OCL is the primary medium for writing constraints on UML models, but the manual translation of SBVR rules to OCL constraints is difficult, complex and time-consuming, and the lack of tool support for automated creation of OCL constraints from SBVR makes this scenario more complex. As both SBVR and OCL are based on First-Order Logic (FOL), model transformation technology can be used to automate the transformation of SBVR to OCL. In this research paper, we present a transformation-rules-based approach to automate the process of SBVR-to-OCL transformation. The presented approach is implemented in the SBVR2OCL prototype tool as a proof of concept. The presented method eases the process of creating OCL constraints and assists designers by simplifying the software design process.
Notes:
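A single transformation rule of the kind this abstract describes can be illustrated as follows. The SBVR pattern, the rule shape, and the mapping are simplified assumptions for illustration; the paper's rule set covers far more constructs than this one cardinality pattern.

```python
import re

# Toy sketch of one SBVR-to-OCL transformation rule: a structural rule of the
# form "It is obligatory that each <entity> has at most <n> <role>" is mapped
# to an OCL invariant. Purely illustrative; not the paper's actual rule set.

RULE = re.compile(r"It is obligatory that each (\w+) has at most (\d+) (\w+)")

def sbvr_to_ocl(sbvr_rule):
    m = RULE.fullmatch(sbvr_rule.strip())
    if not m:
        raise ValueError("unsupported SBVR pattern")
    entity, bound, role = m.groups()
    # Map the SBVR modality ("obligatory") and cardinality to an OCL invariant.
    return (f"context {entity.capitalize()} "
            f"inv: self.{role}->size() <= {bound}")

ocl = sbvr_to_ocl("It is obligatory that each customer has at most 2 accounts")
```

The shared logical foundation mentioned in the abstract is what makes such pattern-to-pattern mappings possible: both the SBVR rule and the OCL invariant express the same first-order constraint.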
Imran Sarwar Bajwa, M Asif Naeem, Ahsan Ali Chaudhri, Shahzad Ali (2011)  A Controlled Natural Language Interface to Class Models   In: 13th International Conference on Enterprise Information Systems (ICEIS 2011) 102-110 Beijing, China: SciTePress  
Abstract: The available approaches for automatically generating class models from natural language (NL) software requirements specifications (SRS) exhibit low accuracy due to the informal nature of NLs such as English. In automated class model generation, higher accuracy can be achieved by overcoming the inherent syntactic ambiguities and semantic inconsistencies of English. In this paper, we propose an SBVR-based approach to generate an unambiguous representation of NL software requirements. In the presented approach, the user inputs the English specification of software requirements, and the approach processes the input to extract SBVR vocabulary and generate an SBVR representation in the form of SBVR rules. The SBVR rules are then semantically analyzed to extract OO information, which is finally mapped to a class model. The approach is implemented in a prototype tool, NL2UMLviaSBVR, that is an Eclipse plugin and a proof of concept. A case study has also been solved to show that the use of SBVR in automated generation of class models from NL software requirements improves accuracy and consistency.
Notes:
Ashfa Umber, Imran Sarwar Bajwa, M Asif Naeem (2011)  NL-Based Automated Software Requirements Elicitation and Specification   In: 1st International Conference on Advances in Computing and Communications (ACC-2011) 30-39 Kerala, India: Springer, Verlag  
Abstract: This paper presents a novel approach to automate the process of software requirements elicitation and specification. Software requirements elicitation is perhaps the most important phase of software development, as a small error at this stage can result in absurd software designs and implementations. The automation of this initial phase can also contribute to the long-standing challenge of automated software development. The presented approach is based on the Semantics of Business Vocabulary and Rules (SBVR), a recent OMG standard. We have also developed a prototype tool, SR-Elicitor (an Eclipse plugin), which software engineers can use to record natural language software requirements and automatically transform them into an SBVR software requirements specification. The major contribution of the presented research is to demonstrate the potential of the SBVR-based approach, implemented in a prototype tool, to improve the process of requirements elicitation and specification.
Notes:
Rubata Riasat, Imran Sarwar Bajwa, Zaman Ali (2011)  A Hash-Based Approach for Colour Image Steganography   In: IEEE International Conference on Computer Networks and Information Technology (ICCNIT 2011) 303-307 Abbottabad, Pakistan: IEEE Press  
Abstract: In this paper, we propose a novel hash-based approach for colour image steganography. The available approaches for colour image steganography that use chaos-based and symmetric-key cryptographic algorithms are neither efficient nor well suited to bulky data. Hash-based approaches are considerably better in terms of speed, but they are vulnerable in terms of security due to inherent flaws in the checksum approach used: the underlying algorithms in such approaches, such as MD5 and SHA-2, have flaws. In our approach, we propose the use of a perfect hash-function algorithm to provide a secure and fast approach for colour image steganography. We also present a prototype tool that implements the presented approach and serves as a proof of concept. Another contribution is that the approach can be used to encode data in any type of colour image, such as bmp, jpeg, gif, and tiff, whereas other available approaches are file-format specific. The results of the initial experiments are very encouraging and uphold the potential of the presented approach.
Notes:
Shazia Kareem, Imran Sarwar Bajwa (2011)  Clinical Decision Support System based Virtual Telemedicine   In: 3rd International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC 2011) 16-21 Zhejiang University, China Hangzhou, China: IEEE CS (CPS)  
Abstract: Telemedicine is a blessing for the people of remote areas because it provides high-level medical facilities in an efficient way at low cost. The store-and-forward method of telemedicine suits developing countries like Pakistan well, not only because it is easy to set up but also because of its low operating cost. However, the high response time of store-and-forward telemedicine becomes a critical factor in emergency cases, where each minute has a price. The response-time factor can be overcome by using a virtual telemedicine approach. In virtual telemedicine, a Clinical Decision Support System (CDSS) is deployed at the rural station. The CDSS is intelligent enough to diagnose a patient's disease and prescribe proper medication. In case the CDSS cannot answer a query, it immediately sends an e-mail to a medical expert (doctor), and when the response is received the CDSS knowledge base is updated for future queries. In this research paper, we not only report an NL-based CDSS that can answer NL queries but also present a complete architecture of a virtual telemedicine setup.
Notes:
Imran Sarwar Bajwa, Mark G Lee, Behzad Bordbar (2011)  SBVR Business Rules Generation from Natural Language Specification   In: AAAI Spring Symposium 2011 – Artificial Intelligence 4 Business Agility 541-545 San Francisco, USA: AAAI  
Abstract: In this paper, we present a novel approach for translating natural language specifications to SBVR business rules. Business rules constrain the business structure or control the behaviour of a business process. In modern business modelling, one of the important phases is writing business rules. Typically, a business rule analyst has to manually write hundreds of business rules in a natural language (NL) and then manually translate the NL specification of all the rules into a particular rule language such as SBVR or OCL, as required. However, the manual translation of NL rule specifications to a formal representation such as SBVR rules is not only difficult, complex and time-consuming but can also result in erroneous business rules. In this paper, we propose an automated approach that translates NL (such as English) specifications of business rules to SBVR (Semantics of Business Vocabulary and Rules) rules. The major challenge in NL-to-SBVR translation was the complex semantic analysis of English; we use a rule-based algorithm for robust semantic analysis of English to generate SBVR rules. Automated generation of SBVR-based business rules can help improve the efficiency of constraining business aspects in typical business modelling.
Notes:
Ashfa Umber, Imran Sarwar Bajwa (2011)  Minimizing Ambiguity in Natural Language Software Requirements Specification   In: IEEE Sixth International Conference on Digital Information Management (ICDIM 2011) 174-178 Melbourne, Australia: IEEE Press  
Abstract: Software requirements are typically captured in a natural language (NL) such as English and then analyzed by software engineers to generate a formal software design/model (such as a UML model). However, English is syntactically ambiguous and semantically inconsistent. Hence, English specifications of software requirements can not only result in erroneous and absurd software designs and implementations, but the informal nature of English is also a main obstacle to machine processing of English specifications of software requirements. To address this key challenge, there is a need for a controlled NL representation of software requirements from which accurate and consistent software models can be generated. In this paper, we report an automated approach to generate a controlled representation of English software requirements specifications based on the Semantics of Business Vocabulary and Rules (SBVR) standard. The SBVR-based controlled representation can not only result in accurate and consistent software models but is also machine-processable, because SBVR has a pure mathematical foundation. We also introduce a Java-based implementation of the presented approach as a proof of concept.
Notes:
2010
Imran Sarwar Bajwa, Behzad Bordbar, Mark G Lee (2010)  OCL Constraints Generation from Natural Language Specification   In: 14th IEEE International Enterprise Distributed Object Computing Conference (EDOC 2010) 204-213 Vitoria, Brazil: IEEE CS (CPS)  
Abstract: The Object Constraint Language (OCL) plays a key role in the Unified Modeling Language (UML). In the UML standards, OCL is used for expressing constraints such as well-definedness criteria. In addition, OCL can be used for specifying constraints on models and pre/post conditions on operations, improving the precision of the specification. As a result, OCL has received considerable attention from the research community. However, despite its key role, there is a common consensus that OCL is the least adopted of all the languages in the UML. It is often argued that software practitioners shy away from OCL due to its unfamiliar syntax. To ensure better adoption of OCL, the usability issues related to producing OCL statements must be addressed. To address this problem, this paper presents a method using natural language expressions and model transformation technology. The aim of the method is to produce a framework so that the user of a UML tool can write constraints and pre/post conditions in English and the framework converts such natural language expressions into equivalent OCL statements. As a result, the approach aims at simplifying the process of generating OCL statements, allowing the user to benefit from the advantages provided by UML tools that support OCL. The suggested approach relies on the Semantics of Business Vocabulary and Rules (SBVR) to support the formulation of natural language expressions and their transformation to OCL. The paper also presents an outline of a prototype tool that implements the method.
Notes:
2009
Imran Sarwar Bajwa, Shahzad Mumtaz, Ali Samad, Rafaqut Kazmi, M Abbas Choudhary (2009)  BPM meeting with SOA: A Customized Solution for Small Business Enterprises   In: IEEE International Conference on Information management & Engineering- (ICIME 2009) 677-682 Kuala Lumpur, Malaysia:  
Abstract: This research paper presents a new pattern for the adoption of SOA in small enterprises, for service delivery beyond the typical role of SOA and BPM and for dynamic endorsement of processes beyond ordinary boundaries. The presented framework shows that the improvements of service orientation and the fruits of process management can be obtained by small business enterprises as well as large ones. Service Oriented Architecture (SOA) and Business Process Management (BPM) in combination have been used for agility in services and dynamic process management for many years, and the partnership of BPM and SOA has been fruitful in merging the benefits of both sides. These benefits have so far been enjoyed mainly by large enterprises, which can not only handle these large-scale architectures with their sizeable workforce but also have the budget to manage the resulting expenses. In this paper, the key issues regarding the adaptation of the BPM and SOA partnership for small business enterprises are elaborated.
Notes:
2008
Munsub Ali, M Shahid Naweed, Imran Sarwar Bajwa (2008)  RSR-ARQ Mechanism For Unreliable Data Communication In GPRS   In: IEEE International Networking and Communications Conference (INCC 2008) 112-117 Lahore, Pakistan: IEEE Press  
Abstract: In GPRS, severe jitter due to roughness (impairment) at the air interface between the mobile station (MS) and base station (BS) has an adverse impact on the efficiency of the LLC (logical link control) and TCP (transmission control protocol) layers. Natural conditions at the air interface make communication reliability more vulnerable; here, reliability refers to efficient recovery of lost or erroneous data. Delays in the recovery of data at the air interface not only decrease the efficiency of the LLC and TCP layers but also increase the chances of buffer overflow. The proposed mechanism is an enhancement of the RSR-ARQ protocol that ensures better efficiency and reliability of data. RSR-ARQ supports a channel-impairment- and buffer-sensitive mechanism without affecting the reliability of the feedback communication. The system uses a two-state discrete-time Markov channel (DTMC). The simulation results demonstrate better recovery of data and better efficiency for the LLC and TCP layers, with minimal chance of buffer overflow.
Notes:
2007
Imran Sarwar Bajwa, S Irfan Hyder (2007)  UCD-Generator - A LESSA Application for Use Case Design   In: IEEE- International Conference on Information and Emerging Technologies (ICIET 2007) 182-187 Karachi, Pakistan: IEEE Press  
Abstract: In object-oriented design, use cases are among the most important but complex components to design. UCD-Generator is a LESSA-based application that makes this process simpler and easier. UCD-Generator is an automated system with the ability to interact directly with the user. This research paper presents LESSA, a natural language processing based approach used to automatically understand natural language text and extract the required information, which is then used to draw use case diagrams. The user writes his interface-based preferences in simple English in a few paragraphs, and the designed system analyzes the given script. After analysis and extraction of the associated information, the system draws the use case diagrams. Conventional CASE tools require a complete understanding of the intended business scenario and a lot of extra time and effort from the system analyst during the process of creating, arranging, labeling and finishing use case diagrams. The designed system provides a quick and reliable way to generate use case diagrams, saving the time and budget of both the user and the system analyst.
Notes:
2006
Imran Sarwar Bajwa, M Imran Siddique, M Abbas Choudhary (2006)  Automatic Domain Specific Terminology Extraction using a Decision Support System   In: 4th IEEE - International Conference on Information and Communication Technology (ICICT 2006) 651-659 Cairo, Egypt: IEEE Press  
Abstract: Speech and natural language content are major tools of communication. This research paper presents a natural language processing based automated system for understanding natural language text. A new rule-based model is presented for analyzing natural languages and extracting the relevant meanings from the given text. The user writes a scenario in simple English in a few paragraphs, and the designed system analyzes the given script. After analysis and extraction of the associated information, the system assigns particular meanings to the text on the basis of its context. The designed system uses standard language rules that are clearly defined for languages such as English, Urdu, Chinese, Arabic and French. The designed system provides a quick and reliable way to comprehend the context of the text and generate the respective meanings. Applications with such abilities can be more intelligent and relevant, saving the user's time.
Notes:
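The context-driven, rule-based disambiguation that the abstract above describes can be illustrated with a toy sketch. The words, senses and rules below are invented placeholders for illustration only, not material from the paper.

```python
# Hypothetical rule table: for an ambiguous word, each rule pairs a set of
# context keywords with the sense to assign when any keyword is present.
# Rules are checked in order, so earlier rules take priority.
RULES = {
    "bank": [
        ({"river", "water", "shore"}, "land alongside a river"),
        ({"money", "account", "loan"}, "financial institution"),
    ],
}

def resolve(word, sentence):
    """Pick a sense for `word` using the other words in `sentence` as context."""
    context = set(sentence.lower().split())
    for keywords, sense in RULES.get(word, []):
        if context & keywords:  # any shared keyword triggers the rule
            return sense
    return "unknown"

print(resolve("bank", "He deposited money at the bank"))
```

A real system would of course need part-of-speech information and far richer rules; this sketch only shows the shape of a keyword-triggered rule lookup.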
M Imran Siddique, Imran Sarwar Bajwa, M Shahid Naweed, M Abbas Choudhary (2006)  Automatic Functional Brain MR Image Segmentation Using Region Growing and Seed Pixel   In: IEEE – 4th International Conference on Information and Communication Technology (ICICT 2006) 589-602 Cairo, Egypt: IEEE Press  
Abstract: Magnetic resonance imaging (MRI) is used to visualize the anatomy and structure of a body organ to assist in the medical diagnosis of certain diseases or conditions and to evaluate a particular disease. Magnetic resonance images of a specified anatomy are constructed using radio waves, a magnetic field and a computer. This technical paper demonstrates the segmentation of brain MR images using region growing and seed pixel methods. Segmentation is a noteworthy phase in various image processing applications. Automatic brain MRI segmentation is a simple, robust and efficient image segmentation algorithm for classifying brain tissues from dual-echo magnetic resonance (MR) images. The designed system incorporates the robust ability of the described algorithm to segment the various parts of a brain MR image automatically. The utilized algorithm consists of an assortment of components such as adaptive histogram analysis, thresholding, and region growing segmentation. These vigorous techniques are used for the sake of accurate categorization of assorted brain regions such as white matter, gray matter, cerebrospinal fluid and ventricular regions. The orthodox techniques exploited for the analysis of a sequence of MR images were time-consuming and inefficient. The conducted research minimizes this overhead by using the semi-automated designed system, which has been tested successfully on multiple DICOM-standard real brain MR images.
Notes:
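The seed-pixel region growing named in the abstract above can be sketched in a few lines. This is an illustrative toy implementation on a small intensity grid, assuming a fixed intensity tolerance and 4-connectivity; it is not the paper's code.

```python
# Minimal region-growing sketch: starting from a seed pixel, absorb
# 4-connected neighbours whose intensity is within `tol` of the seed.
from collections import deque

def region_grow(image, seed, tol=10):
    rows, cols = len(image), len(image[0])
    sr, sc = seed
    target = image[sr][sc]           # seed intensity defines the region
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if abs(image[nr][nc] - target) <= tol:
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# Toy 3x4 intensity grid: a dark patch (top left), a bright patch (right).
img = [
    [10, 12, 90, 91],
    [11, 13, 92, 90],
    [50, 52, 93, 94],
]
print(len(region_grow(img, (0, 0), tol=5)))  # → 4
```

In a full pipeline, adaptive histogram analysis would supply the seeds and thresholds per tissue class rather than the hand-picked tolerance used here.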
Imran Sarwar Bajwa, M Abbas Choudhary (2006)  A Study for Prediction of Minerals in Rock Images using Back Propagation Neural Networks   In: IEEE 1st International Conference on Advances in Space Technologies (ICAST 2006) 185-189 Islamabad, Pakistan:  
Abstract: This paper presents a novel approach for the segmentation of ground-based images of rocks using a back propagation neural network architecture. The designed system identifies the possible minerals by analyzing the surface colour of the rocks. The rocks in Balochistan are very hard and well defined, and such rocks are typically full of minerals. The rocks in the province of Balochistan are peculiar in their shape and surface colour. Usually, these colours develop due to the reaction of mineral particles with air. The upper layer of dust upon these rocks can be really useful in identifying the possible minerals concealed inside the rocks. The designed mechanism uses conventional artificial neural networks to identify the various coloured parts of the rocks, which are further classified into different minerals using histograms. The BPNN learns to solve the task through a dynamic adaptation of its classification context. The designed system is trained by providing it the basic information related to the physical features of various minerals and types of rocks. The designed system highlights the various parts of the images by using different colours for different minerals.
Notes:
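The colour-to-mineral histogram step described above can be sketched as a nearest-reference-colour count. The mineral names and reference RGB colours below are invented placeholders, not the paper's data, and the real system uses a trained BPNN rather than this plain nearest-colour rule.

```python
# Placeholder reference colours: each candidate mineral is represented by
# one RGB surface colour (values here are made up for illustration).
REFERENCE = {
    "mineral_a": (200, 60, 40),   # a reddish surface colour
    "mineral_b": (70, 70, 200),   # a bluish surface colour
}

def nearest_mineral(pixel):
    """Label a pixel with the mineral whose reference colour is closest."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE, key=lambda name: dist2(pixel, REFERENCE[name]))

def mineral_histogram(pixels):
    """Count how many pixels fall to each reference mineral colour."""
    counts = {name: 0 for name in REFERENCE}
    for p in pixels:
        counts[nearest_mineral(p)] += 1
    return counts

print(mineral_histogram([(190, 55, 50), (60, 80, 190), (210, 70, 30)]))
```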
Imran Sarwar Bajwa, M Imran Siddique, M Abbas Choudhary (2006)  Rule Based Production System for Automatic Code Generation in Java   In: 1st IEEE - International Conference on Digital Information Management (ICDIM 2006) Bangalore, India: IEEE Press  
Abstract: The Unified Modeling Language is being used as a premier tool for modeling user requirements, and CASE tools provide an easy way to get efficient solutions. This paper presents a natural language processing based automated system for generating code in multiple languages after modeling the user requirements with UML. UML diagrams are first generated by analyzing the business scenario provided by the user. A new model is presented for analyzing the natural language and extracting the relative and required information from the requirement notes given by the user. The user writes the requirements in simple English in a few paragraphs, and the designed system has the conspicuous ability to analyze the given script. After compound analysis and extraction of the associated information, the designed system draws various UML diagrams such as activity diagrams, sequence diagrams, class diagrams and use case diagrams. The designed system has the robust ability to create code automatically without an external environment. The designed system provides a quick and reliable way to generate UML diagrams and the respective code, saving the time and budget of both the user and the system analyst.
Notes:
Imran Sarwar Bajwa, M Abbas Choudhary (2006)  Knowledge engineering from a web smart space using speech language processing techniques   In: Fifth International Conference on Information and Management Sciences (IMS 2006) 271-275 Changdu, China:  
Abstract: A smart space is a common place where the solution of a specific domain problem comes with intelligence. A web smart space is a model in which data is stored and retrieved through intelligent queries. In the smart space scenario, the space is permeated with high intelligence instead of being an empty and inactive construct. Functionally, smart spaces can be more intelligent than human brains by involving human wit in AI-based systems. An ontology defines what actually exists in a domain and how the domain elements relate to each other. Information can be intelligently and smartly retrieved from a web smart space using ontology engineering. This paper presents an automated system for ontology-based information retrieval from a web smart space using natural language processing techniques. Natural language scripts from web documents are read, understood and analyzed using the rule-based method presented in this research. This type of intelligent searching can be useful in various business and commercial areas. The designed system provides a quick and reliable way to extract information from various web ontologies, saving time for an effective and efficient solution.
Notes:
Imran Sarwar Bajwa, M Imran Siddique, M Abbas Choudhary (2006)  Web Layout Mining (WLM): A new Paradigm for Intelligent Web layout Design   In: 4th IEEE - International Conference on Information and Communication Technology (ICICT 2006) 639-650 Cairo, Egypt: IEEE Press  
Abstract: The problem in designing modern Website projects is to produce contents according to the latest trends and styles. Common Website editors just help to draw the intended layouts, but the problem is to design an accurate Web layout according to the demand and the latest trends and styles. This approach is useful when the user already has a specific layout in mind and is familiar with Web page layout principles, i.e. what kinds of layouts are possible. It is intrinsically difficult, particularly for those who have limited artistic and creative abilities, to design from scratch a good layout that is acceptable in every respect. An automated system is required that has the ability to mine the layouts of the desired type of Websites. The designed system for "Web layout mining (WLM)" helps to mine the most popular Web layouts from the Internet and design a Web layout that is close to acceptable and has all the marks and features of modern requirements. The designed system is actually based on a rule-based algorithm which helps the user to search out some samples related to his Website category; afterwards the user chooses a desired Web layout and designs his own with proper implications and variations according to his own requirements.
Notes:
M Kashif Nazir, Imran Sarwar Bajwa, M Imran Khan (2006)  A Conceptual Framework of Earthquake Disaster Management System (EDMS) for Quetta City using GIS   In: 1st IEEE - International Conference on Advances in Space Technologies, (ICAST-2006) 117-120 Islamabad, Pakistan: IEEE Press  
Abstract: This paper, about an earthquake disaster management system for Quetta city using GIS, points out how the effects of earthquakes can be minimized. For this purpose it proposes two strategies: preparedness for when the disaster occurs, and hazard mitigation and emergency response. The earthquake disaster preparedness plan is essentially a plan that identifies weaknesses and threats in the urban environment and proposes strategies to overcome these weaknesses. The earthquake disaster mitigation plan describes how to respond when a disaster occurs. The GIS will be developed using risk mapping, that is, hazard layers and typical or general layers. Typical layers include the layers typically necessary for developing a GIS for a disaster, for example service infrastructure, housing typologies, and critical emergency facilities such as police and fire stations and hospitals, whereas hazard layers include a seismic hazard layer, a seismic micro-zoning layer and a risk categorization layer.
Notes:
Imran Sarwar Bajwa, M Abbas Choudhary (2006)  A Language Engineering System for Graphical User Interface Design (LESGUID): A Rule based Approach   In: IEEE- 2nd International Conferences on Information & Communication Technologies: from Theory to Applications, (ICTTA 2006) 1307-1309 Damascus, Syria:  
Abstract: A user interface is the way a user communicates with a computer through a particular software application. It is the physical means of communication between a person and a software program or operating system. Often, a user interface is composed of common methods of communication such as various ActiveX controls: command buttons, menus, icons, etc. User interface design is an integral part of the software engineering process. Conventional coding schemes require a lot of time and effort from a programmer. The process of designing and coding a user interface can be simplified by generating the user interface automatically. A user interface can be designed automatically on the basis of a design scenario provided by the programmer in the form of text. A new model is presented in this research for analyzing the given text and extracting the relative and required information from the guideline notes provided by the programmer. After compound analysis and extraction of the associated information, the designed system has the ability to generate the graphical user interface. The designed system provides a quick and reliable way to generate a graphical user interface, saving the time and budget of both the user and the system analyst.
Notes:
2005
Imran Sarwar Bajwa, S Irfan Hyder (2005)  PCA based Image classification of Single Layered Cloud Types   In: IEEE- International Conference on Emerging Technologies, (ICET 2005) 365-369 Islamabad Pakistan: IEEE Press  
Abstract: The paper presents an automatic classification system which discriminates the different types of single-layered clouds using principal component analysis (PCA) with enhanced accuracy as compared to other techniques. PCA is an image classification technique typically used for face recognition. Principal components are the distinctive or peculiar features of an image. The approach described in this paper uses this PCA capability for enhancing the accuracy of cloud image analysis. To demonstrate this enhancement, a software classifier system has been developed that incorporates PCA capability for better discrimination of cloud images. The system is first trained using cloud images. In the training phase, the system reads the major principal features of the different cloud images to produce an image space. In the testing phase, a new cloud image can be classified by comparing it with the specified image space using the PCA algorithm.
Notes:
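The train/test flow the abstract describes — project training images into a principal-component "image space", then classify a new image against it — can be sketched as below. The data here is synthetic stand-in vectors, not cloud imagery, and the nearest-neighbour rule in the reduced space is an illustrative assumption rather than the paper's exact classifier.

```python
# Hedged sketch of PCA-based (eigenimage-style) classification.
import numpy as np

def fit_pca(X, k):
    """X: (n_samples, n_features). Return the mean and top-k principal axes."""
    mean = X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal components.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def classify(x, mean, comps, train_proj, labels):
    """Nearest training sample in the reduced PCA space."""
    p = comps @ (x - mean)
    d = np.linalg.norm(train_proj - p, axis=1)
    return labels[int(np.argmin(d))]

# Two synthetic "cloud image" classes as flattened feature vectors.
rng = np.random.default_rng(0)
low = rng.normal(0.2, 0.05, size=(10, 16))   # stand-in for low-level clouds
high = rng.normal(0.8, 0.05, size=(10, 16))  # stand-in for high-level clouds
X = np.vstack([low, high])
labels = ["low"] * 10 + ["high"] * 10

mean, comps = fit_pca(X, k=2)                # training: build the image space
train_proj = (X - mean) @ comps.T            # project the training set into it
print(classify(np.full(16, 0.75), mean, comps, train_proj, labels))
```

Reducing to k components is what lets PCA handle large images cheaply: distances are computed in a k-dimensional space instead of the full pixel space.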

Masters theses

2005
Imran Sarwar Bajwa, S Irfan Hyder (2005)  Cloud Types Classification Using PCA   PAF-Karachi Institute of Economics and Technology, 28-P.E.C.H.S, Karachi, Pakistan:  
Abstract: An automatic classification system is presented which discriminates the different types of single-layered clouds using Principal Component Analysis (PCA) with enhanced accuracy and faster processing speed as compared to other techniques. PCA is an image classification technique typically used for face recognition. PCA can be used to identify the image features called principal components; a principal component is a peculiar feature of an image. The approach described in this report uses this PCA capability for enhancing the accuracy of cloud image analysis. To demonstrate this enhancement, a software classifier system has been developed that incorporates PCA capability for better discrimination of cloud images. The system is first trained on cloud images. In the training phase, the system reads the major principal features of the different cloud images to produce an image space. In the testing phase, a new cloud image can be classified by comparing it with the specified image space using the PCA algorithm. Weather forecasting applications use various pattern recognition techniques to analyze cloud information and other meteorological parameters. Neural networks are an often-used methodology for image processing. Some statistical methodologies like FDA, RBFNN and SVM are also used for image analysis. These methodologies require more training time and have a limited accuracy of about 70%. This level of accuracy often degrades the classification of clouds, and hence the accuracy of rain and other weather predictions is reduced. Better accuracy in cloud classification means accurate categorization of clouds into high, mid and low levels; these high-, mid- and low-level clouds are further classified into their particular subclasses. PCA can easily handle a large amount of data due to its capability of reducing data dimensionality and complexity, thus getting better results. The PCA algorithm provides a more accurate cloud classification that yields better and more concise forecasting of rain.
Notes:
2003
Imran Sarwar Bajwa, Shahzad Ali (2003)  UML-Generator: A Tool of Generating UML Diagrams from NL Specification   The Islamia University of Bahawalpur, Bahawalpur, Pakistan:  
Abstract: The help provided by CASE tools in the development of software systems is very important. These tools are evolving by integrating new ways of making the job of the software engineer easier. However, manual translation of natural language (NL) requirements specifications into graphical software models such as the Unified Modelling Language (UML) is a complex and time-consuming task, especially for novice users. This thesis studies and develops a module for a CASE tool that automatically translates natural language text into the UML diagrams typically used in software modelling. This translation is a complex and challenging task due to the inherent ambiguities of NL such as English. We propose a rule-based approach to translate natural language requirements specifications into diagrams. Our approach enables the system to adapt to the user's vocabulary and the way that s/he models software systems. The developed techniques can be employed for various applications like document summarization, software analysis and modelling, database query generation, etc. With the presented approach, the user writes the requirements in simple English in a few paragraphs, and the designed system has the conspicuous ability to analyze the given script. After compound analysis and extraction of the associated information, the designed system draws various UML diagrams such as activity diagrams, sequence diagrams, class diagrams and use case diagrams. The designed system has the robust ability to create code automatically without an external environment. The designed system provides a quick and reliable way to generate UML diagrams and the respective code, saving the time and budget of both the user and the system analyst.
Notes:
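The rule-based extraction of model elements from requirement sentences can be illustrated with a toy pattern: treating the subject of a "<actor> can <action>" sentence as an actor and the verb phrase as a candidate use case. The pattern and sample text below are invented for illustration; the thesis's rule set is far richer.

```python
# Toy rule: match sentences of the form "The <actor> can <action>."
import re

PATTERN = re.compile(r"^(?:The\s+)?(\w+)\s+can\s+(\w+(?:\s+\w+)*)\.?$",
                     re.IGNORECASE)

def extract_use_cases(text):
    """Return (actor, use_case) pairs from simple requirement sentences."""
    pairs = []
    for sentence in re.split(r"(?<=\.)\s+", text.strip()):
        m = PATTERN.match(sentence.strip())
        if m:
            actor = m.group(1).capitalize()
            use_case = m.group(2).rstrip(".").lower()
            pairs.append((actor, use_case))
    return pairs

text = "The customer can place an order. The clerk can approve an order."
print(extract_use_cases(text))  # → [('Customer', 'place an order'), ('Clerk', 'approve an order')]
```

Each extracted pair maps directly to one actor and one use-case bubble in a use case diagram; real NL input would additionally need part-of-speech tagging and anaphora handling.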

Technical reports
