Research & Publications

Early detection of Alzheimer's Disease using Local measure on Magnetic Resonance Images

Abstract: Alzheimer's disease (AD) is the most common cause of dementia and is not curable after a certain stage. The nerve cells that carry messages in the brain, particularly those responsible for storing memories, are slowly damaged by the formation of tangles and plaques made from protein fragments in the affected areas of the brain.

This paper focuses on early detection of Alzheimer's disease using Magnetic Resonance Imaging (MRI), so that effective medication is possible. The minute changes in the gray matter (GM) and white matter (WM) are visible in the MRI, and processing them aids the expert in reaching the correct diagnosis. The GM and WM are segmented from the image, and the texture information is extracted using different variants of the Local Binary Pattern (LBP). It is observed that the local graph structure with the inclusion of grayscale information is a good descriptor for classifying AD and mild cognitive impairment (MCI). The accuracy of the classification is further improved by choosing a proper threshold in the formation of the local pattern.
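As an illustration of the kind of texture descriptor this work builds on, the following is a minimal sketch of the basic LBP operator on a grayscale image; the thresholding step is where the variants mentioned above (e.g. a tolerance threshold or extra grayscale information) would plug in. The function names and the 3x3 neighbourhood ordering are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern code for the centre pixel of `patch`.

    Each of the 8 neighbours is compared against the centre; neighbours
    greater than or equal to the centre contribute a 1-bit, and the bits
    are packed clockwise into an 8-bit code.
    """
    center = patch[1, 1]
    # clockwise neighbour order starting at the top-left pixel
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= center:          # a tolerance threshold could be added here
            code |= 1 << bit
    return code

def lbp_histogram(image):
    """Normalised histogram of LBP codes over a grayscale image (2-D array)."""
    hist = np.zeros(256, dtype=np.int64)
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            hist[lbp_code(image[i - 1:i + 2, j - 1:j + 2])] += 1
    return hist / hist.sum()         # texture descriptor for the segmented GM/WM
```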

Clinical Decision support system for Brain MR images

Abstract: Researchers have been developing computerized methods to help medical experts in their diagnostic process. Most of these efforts have focused on improving effectiveness on single-patient data, such as computing a brain lesion size or the Cobb angle for scoliosis diagnosis. The comparison of multiple patients and their pathologies for improving diagnosis has not received much attention. Patient-to-patient comparison should especially improve the diagnosis of diseases that affect a large number of patients. The neurology department can greatly benefit from such multi-patient comparison, because diagnosing neurodegenerative diseases from a single patient's data has limitations. In this work, a search and retrieval mechanism is applied to Magnetic Resonance brain images to answer the following questions:

  1. Is it possible to retrieve similar anatomical images from a large image database using only the content of the images, without any keyword such as patient ID or name?
  2. Can we build a clinical decision support system that retrieves the n images nearest to a query image from a large pool of images and uses them to predict the disease class of the query?

Work done

We have developed feature descriptors called Modified Local Binary Pattern (MOD_LBP) and Modified Local Ternary Pattern for the retrieval. Since the boundaries between the different levels (the height to which a slice belongs in the brain 3D volume) are not clearly defined, the degree to which an image belongs to a particular level can be expressed using fuzzy membership functions. The local descriptor will be extended to a fuzzy local descriptor in order to extract more discriminative information from the images. The results can be improved by incorporating visual semantics into the system, either by reweighting the features or by using classifiers such as Neural Networks or Support Vector Machines. The system can further be extended to support the diagnosis of specific conditions such as dementia and myelination problems.
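A minimal sketch of the kind of fuzzy membership function mentioned above, assuming a triangular membership over slice height; the level centres and widths are illustrative values, not the ones used in the actual system.

```python
import numpy as np

def triangular_membership(z, center, width):
    """Degree (0..1) to which a slice at height `z` belongs to a level
    centred at `center` with half-width `width` (triangular fuzzy set)."""
    return max(0.0, 1.0 - abs(z - center) / width)

def level_memberships(z, level_centers, width):
    """Membership of slice height `z` in every level; overlapping levels
    let one slice contribute partially to several fuzzy local descriptors."""
    return np.array([triangular_membership(z, c, width) for c in level_centers])

# Example: five anatomical levels spread over a 120 mm span of the volume
memberships = level_memberships(z=47.0,
                                level_centers=[10, 35, 60, 85, 110],
                                width=20.0)
print(memberships)   # a slice at 47 mm belongs mostly to levels 2 and 3
```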

(This work is in collaboration with Dr. B Kannan from CUSAT, Mr. Manesh T from Prince Sattam Bin University, KSA, and Dr. RejiRajan Varghese, Radiologist, Cooperative Medical College, Cochin.)

Duplicate Record Detection in XML using AI Techniques

Abstract: Duplicate detection is the identification of multiple representations of the same real-world entity. XML is widely used in almost all applications, especially for data on the web. Due to this wide usage, it is essential to identify duplicates in XML. Methods such as normalization are used for duplicate detection in relational databases, but they cannot be employed for XML because of its complex structure. Detecting and eliminating duplicates correctly has therefore become a challenging issue wherever data integration is performed, and many techniques have emerged for detecting duplicates in both relational databases and XML data. In this work, a strategy based on a Bayesian Network, called XMLDup, is used to detect duplicates, and machine learning and optimization algorithms such as SVM, Bee and Bat algorithms are applied to improve its efficiency; these are compared to find the most effective method for detecting duplicates in XML.

A Genetic Optimized approach for Detecting XML Duplication using MBAT

Abstract: Duplicates are multiple representations of the same real-world object. XML is now widely used for data transmission on the web, and the presence of duplicates is a major problem in XML mining. Because of this wide use, we have to identify duplicates in XML, since they reduce the quality of the data. Recognizing and eliminating duplicates in XML data is the solution; a strategy based on a Bayesian Network, called XMLDup, is currently used to detect duplicates. Here we introduce a new genetic-based approach for XML duplicate detection and optimize it using MBAT, a swarm intelligence algorithm, to improve its efficiency; the result shows higher performance compared with XMLDup.

An Extended TDW Scheme by Word Mapping Technique

Abstract: In recent years there has been a large growth in web content on the internet. The internet does not provide any standard mechanism for verifying web content before it is hosted on web servers, which increases the number of near and exact duplicate contents on the internet from heterogeneous sources. These duplicates can be either intentional or accidental. The problem of finding near-duplicate web pages has been a subject of research in the database and web-search communities for some years. Since most prevailing text mining methods adopt term-based approaches, they suffer from the problems of word synonymy and a large number of comparisons. In this paper, we deal with the detection of near and exact duplicate web pages by using a term document weighting (TDW) scheme, sentence-level features and synonym detection. The existence of these near and exact duplicate pages causes problems ranging from wasted network bandwidth and storage cost to reduced search engine performance through duplicated content indexing and increased load on remote hosts.

Web Page Categorization with Extended TDW Scheme

Abstract: The exponential growth of the internet over the past decade has produced millions of web pages on every subject. The internet provides only a medium for communication between computers and for accessing online documents over the network, not a way to organize this large amount of data. Subject-based web directories such as the Open Directory Project's (ODP) Directory Mozilla (DMOZ) and Yahoo organize web pages into a hierarchy. Due to the rapid growth of web pages, categorization demands machine learning techniques to automatically maintain such directory services. The textual information in a page serves as a hint for assigning it to a class. Here we propose a method which uses an extended TDW scheme for feature representation and a naïve Bayes classifier to build the classification model. Web page categorization provides a wide range of benefits, from knowledge-base construction to improving the quality of web search results, web content filtering, focused crawling, etc.

Near-Duplicate Web Page Detection by Enhanced TDW and simHash Technique

Abstract: The internet has driven an explosion in communication and information retrieval, and this massive development of the web has led to millions of web pages being hosted on heterogeneous platforms. The lack of a standard mechanism to check whether a page already exists before hosting it on a server increases the number of near-duplicate pages on the internet. These near duplicates can be either intentional or accidental. The issue of finding near-duplicate web pages has been a subject of research in the database and web-search communities for several years. Since most prevailing content mining strategies adopt term-based methodologies, they all suffer from word synonymy and a large number of comparisons. In this paper we propose a method for detecting near and exact duplicate web pages by using an extended term document weighting (TDW) scheme, sentence-level features and the simHash technique. The existence of these near and exact duplicate pages causes problems ranging from wasted network bandwidth and storage cost to reduced search engine performance through duplicated content indexing and increased load on remote hosts.
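A minimal sketch of the simHash fingerprinting step referred to above, assuming simple whitespace tokenisation and 64-bit hashes; the real system combines this with the extended TDW weights and sentence-level features, which are not shown here.

```python
import hashlib

def simhash(tokens, weights=None, bits=64):
    """64-bit simHash fingerprint of a token sequence.

    Each token is hashed; for every bit position a weighted vote is taken
    across tokens, and the sign of the vote decides the fingerprint bit.
    `weights` can carry per-token TDW weights (defaults to 1 per token).
    """
    if weights is None:
        weights = [1.0] * len(tokens)
    votes = [0.0] * bits
    for token, w in zip(tokens, weights):
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            votes[i] += w if (h >> i) & 1 else -w
    return sum(1 << i for i in range(bits) if votes[i] > 0)

def hamming_distance(a, b):
    """Number of differing fingerprint bits; small distances flag near duplicates."""
    return bin(a ^ b).count("1")

page1 = "the internet does not verify web content before hosting".split()
page2 = "the internet does not verify web contents before hosting them".split()
print(hamming_distance(simhash(page1), simhash(page2)))  # small for near duplicates
```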

Load Balancing In Cloud: Workload Migration among Virtual Machines

Abstract: The day-by-day increase in online computation and migration to the cloud has significantly increased the importance of load balancing. Load balancing facilitates uninterrupted availability of services, which in turn helps fulfill the SLA. Virtualization supports load balancing in cloud data centers through virtual machine migration and dynamic resource scheduling, and there are several methods to establish virtual machine migration and resource allocation. Through this process the load-balancing performance of the data center can be improved, which leads to user satisfaction through prompt response. This work explores the different methods currently available to achieve load balancing through virtual machine migration and dynamic resource allocation. It also simulates a workload migration strategy using CloudSim for achieving load balancing; this strategy migrates workload based on the resource utilization of the virtual machines. The experimental results show that this strategy reduces both the waiting time to start execution and the turnaround time of the total workload.
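A minimal sketch, in Python rather than the CloudSim Java API, of the kind of utilization-based migration decision described above: workload is moved from the most loaded virtual machine to the least loaded one when a threshold is exceeded. The threshold value and data structures are illustrative assumptions, not the simulated strategy itself.

```python
def select_migration(vm_utilization, high_threshold=0.8):
    """Pick a (source, target) VM pair for workload migration.

    `vm_utilization` maps VM ids to their current resource utilization
    in [0, 1]. Workload is migrated from the most utilized VM to the
    least utilized one, but only if the source exceeds the threshold.
    """
    source = max(vm_utilization, key=vm_utilization.get)
    target = min(vm_utilization, key=vm_utilization.get)
    if vm_utilization[source] < high_threshold or source == target:
        return None                      # balanced enough, no migration
    return source, target

print(select_migration({"vm1": 0.92, "vm2": 0.35, "vm3": 0.60}))  # ('vm1', 'vm2')
```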

Region Incrementing Visual Cryptography Using Lazy Wavelet Transform

Abstract: Video steganography is the most widely used form of steganography. Steganography is the art of hiding secret information inside digital cover media; the hidden message can be text, image, speech or video, and accordingly the cover can be an image or a video. Here we perform steganography on videos and hide the message in encrypted form, so the security is doubled. The messages hidden here are shares produced using region incrementing visual cryptography. The most commonly used technique for hiding information is LSB (Least Significant Bit) steganography, but instead of the simple LSB technique we use the Lazy Lifting Wavelet transform and then apply LSB embedding in the resulting subbands of the video. The proposed approach utilizes both the video and audio components to hide the message: the encrypted message is hidden in the video component, while the audio component carries, via the LSB technique, the length up to which the message is hidden in the video. In this way the security of region incrementing visual cryptography is improved.
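A minimal sketch of the LSB embedding step mentioned above, assuming 8-bit sample values in a flat numpy array (e.g. one wavelet subband); the lazy lifting wavelet transform and the share encryption that precede this step are not shown.

```python
import numpy as np

def embed_lsb(samples, message_bits):
    """Hide `message_bits` (iterable of 0/1) in the least significant bits
    of `samples` (1-D uint8 array). Returns a modified copy."""
    stego = samples.copy()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit      # clear the LSB, set it to the message bit
    return stego

def extract_lsb(samples, n_bits):
    """Read back the first `n_bits` least significant bits."""
    return [int(samples[i] & 1) for i in range(n_bits)]

cover = np.array([200, 13, 77, 154, 9, 240, 66, 31], dtype=np.uint8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, bits)
assert extract_lsb(stego, len(bits)) == bits    # message survives embedding
```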

Digital Secret Sharing using XOR based region incrementing and lazy wavelet in video

Abstract: Secret sharing is the process of distributing a secret amongst a group of participants, each of whom is allocated a share of the secret. The secret can be reconstructed only when a sufficient number of possibly different types of shares are combined; individual shares are of no use on their own. Steganography can be applied to video files to hide the message in encrypted form, thus achieving a layered cryptographic system. The most frequently used technique is Least Significant Bit (LSB) steganography, but instead of traditional LSB encoding we use a modified encoding technique which first transforms the video using a Lazy Lifting Wavelet transform and then applies LSB embedding in the resulting sub-bands of the video.

Data Transformation Method For Discrimination Prevention Along With Privacy Preserving

Abstract: Data mining is the extraction of useful information from huge datasets. A data mining system can be classified by the type of knowledge mined, i.e. by functionalities such as characterization, discrimination, association and correlation analysis, classification and prediction. Discrimination is an important problem when considering legal and ethical aspects, such as treating people unfairly based on the group they belong to; discrimination means distinguishing people based on their age, race, gender, etc. Anti-discrimination techniques are used to prevent discrimination arising from the dataset. Discrimination can be classified into two types: direct and indirect. Direct discrimination means directly rejecting people on the basis of their age, gender, etc.; indirect discrimination means rejecting people based on background knowledge. In this paper, we discuss how to prevent both direct and indirect discrimination at the same time, together with α-protective Incognito.

CBIR of Brain MR Images using Histogram of Oriented Gradients and Local Binary Patterns: A Comparative Study

Abstract: Retrieval of similar images from a large dataset of brain images across patients would help experts in the diagnostic decision process. Generally, visual features such as color, shape and texture are used for retrieving similar images in a Content-Based Image Retrieval (CBIR) process. In this paper, a Histogram of Oriented Gradients (HOG) based feature extraction method is used to retrieve similar brain images from a large image database. HOG, a shape feature extraction method, has proven to be an effective descriptor for object recognition in general. It is compared with the texture descriptor called Local Binary Pattern (LBP), and the results show that HOG outperforms the texture descriptor. The accuracy of the method is tested under different noise levels and intensity non-uniformity.
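A minimal sketch of HOG-based retrieval along the lines described above, using scikit-image's `hog` function and Euclidean distance for ranking; the cell/block sizes and the distance measure are illustrative assumptions rather than the paper's exact settings, and all images are assumed to share the same shape.

```python
import numpy as np
from skimage.feature import hog

def hog_descriptor(image):
    """HOG feature vector of a grayscale brain slice (2-D array)."""
    return hog(image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def retrieve_similar(query, database, n=5):
    """Indices of the `n` database images closest to `query`,
    ranked by Euclidean distance between HOG descriptors."""
    q = hog_descriptor(query)
    distances = [np.linalg.norm(q - hog_descriptor(img)) for img in database]
    return np.argsort(distances)[:n]
```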

An Efficient CBIR of Brain MR Images Using Histogram of Oriented Gradients And Local Binary Pattern

Abstract: Retrieval of similar images from a large dataset of brain images across patients would help experts in the diagnostic decision process. Generally, visual features such as color, shape and texture are used for retrieving similar images in a Content-Based Image Retrieval (CBIR) process. In this paper, a Histogram of Oriented Gradients (HOG) based feature extraction method is used to retrieve similar brain images from a large image database. HOG, a shape feature extraction method, has proven to be an effective descriptor for object recognition in general. It is compared with the texture descriptor called Local Binary Pattern (LBP), and the results show that HOG outperforms the texture descriptor. The accuracy of the method is tested under different noise levels and intensity non-uniformity.

Cheating Prevention Schemes for Visual Cryptography

Abstract: Visual cryptography (VC) is an encryption technique that encrypts a secret image into different shares such that stacking a sufficient number of shares reveals the secret image. Most previous research on VC focuses on improving two parameters: pixel expansion and contrast. We consider the cheating problem in visual cryptography schemes and investigate various cheating prevention schemes. During the reconstruction of the secret, one participant, called the cheater, may release a false share, and as a result a fake image will be revealed.

Security Enhanced Visual Cryptography Scheme with Cheating Prevention Ability

Abstract: Visual Cryptography (VC) is an encryption technique that encrypts a secret image into transparent shares such that stacking a sufficient number of shares reveals the secret image without any complex computation. In the existing visual cryptographic scheme, the dealer or sender takes a secret image and encodes it into shares, which are then sent to participants; the receiver collects the shares and stacks them to decode the secret image. No verification is done, so during the share reconstruction phase a dishonest participant or dealer may submit fake shares instead of genuine ones, revealing a fake image: effortless cheating is possible. To attain cheating prevention in VC, a steganographic scheme is used to embed a secret message at a random location in each share during the share generation phase, producing stego shares. Security is further enhanced by embedding each of these stego shares in a cover work using the LSB technique. At recovery time, an LSB extraction step decodes the shares from the cover and a message extraction step retrieves the text from each share, which prevents cheating and verifies the originality of the share; visual cryptography then reveals the original visual information by stacking the shares. The proposed visual cryptography scheme provides cheating prevention ability as well as improved security.

An Efficient Algorithm for Identification of Most Valuable Itemsets from WebTransaction Log Data

Abstract: Web utility mining has recently become a blooming topic in the field of data mining, as has web mining, an important research topic in database technologies. Web utility mining is effective not only in discovering frequent temporal web transactions and generating high-utility itemsets, but also in identifying the profit of web pages. To enhance web utility mining, this study proposes a mixed approach combining web mining, temporal high-utility itemset mining and on-shelf utility mining algorithms, to provide web designers and decision makers with more useful and meaningful web information. In the two phases of the algorithm, we employ efficient, modern techniques of web and utility mining to yield excellent results on web transactional databases. Mining the most valuable itemsets from a transactional dataset refers to identifying the itemsets with a high utility value, i.e. profit. Although there are various algorithms for identifying high-utility itemsets, this improved algorithm focuses on online shopping transaction data. Similar algorithms proposed so far share a problem: they generate a large set of candidate itemsets for the most valuable itemsets and also require a large number of database scans. Generating a large number of itemsets degrades mining performance with respect to execution time and space requirements, and the situation worsens when the database contains a large number of transactions. In the proposed system, information about valuable itemsets is recorded in a tree-based data structure called the Utility Pattern Tree, a compact representation of the items in the transaction database. By creating the Utility Pattern Tree, candidate itemsets are generated with only two scans of the database. The recommended algorithm not only reduces the number of candidate itemsets but also works efficiently when the database has many long transactions.
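A minimal sketch of the notion of itemset utility that the algorithm above is built around: the utility of an itemset in a transaction is quantity times unit profit, summed over the transactions that contain the whole itemset. The toy data and the brute-force scan are for illustration only; the Utility Pattern Tree avoids exactly this kind of repeated candidate enumeration.

```python
# Each transaction maps an item to the quantity purchased (illustrative data).
transactions = [
    {"pen": 2, "book": 1},
    {"pen": 5, "bag": 1},
    {"book": 2, "bag": 2, "pen": 1},
]
profit = {"pen": 1.0, "book": 5.0, "bag": 8.0}   # unit profit per item

def itemset_utility(itemset, transactions, profit):
    """Total utility of `itemset`: sum over transactions containing every
    item of (quantity * unit profit) for the items in the itemset."""
    total = 0.0
    for t in transactions:
        if all(item in t for item in itemset):
            total += sum(t[item] * profit[item] for item in itemset)
    return total

print(itemset_utility({"pen", "book"}, transactions, profit))  # (2*1 + 1*5) + (1*1 + 2*5) = 18.0
```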
 

A Comparative Study on Gene Selection and Classification Methods for the Cancer Subtypes Prediction

Abstract: Microarray gene expression data has gained great importance in recent years due to its role in disease diagnosis and prognosis, which helps in choosing the appropriate treatment plan for patients. Interpreting gene expression data remains a difficult problem and an active research area because of its inherently high dimensionality and low sample size, which pose great challenges to existing classification methods. Effective feature selection techniques are therefore often needed to correctly classify different tumor types and consequently improve treatment strategies. The small sample size remains a bottleneck for designing suitable classifiers; traditional supervised classifiers can only work with labeled data, so a large amount of microarray data lacking adequate follow-up information is disregarded. In particular, this study focuses on the most used data mining techniques for gene selection and semi-supervised cancer classification, and it provides a general idea for future improvement in this field.

Prediction of Cancer Subtypes from Microarray Data through Kernelized Fuzzy Rough Set and Association Rule Based Classification

Abstract: Microarrays have gone from obscurity to being almost ubiquitous in biological research. At the same time, the statistical methodology for microarray analysis has progressed from simple visual assessment of results to novel algorithms for analyzing changes in expression profiles. Microarray cancer data, organized in a samples-versus-genes fashion, are being exploited for the classification of tissue samples into benign and malignant classes or their subtypes. In this paper, we attempt a prediction scheme that combines the kernelized fuzzy rough set (KFRS) method for feature (gene) selection with association rule based classification. Biomarkers are discovered employing three feature selection methods, including KFRS. The effectiveness of the proposed combination of KFRS and association rule based classification on microarray data sets is demonstrated, and the cancer biomarkers identified from miRNA data are reported. To show the effectiveness of the proposed approach, we compare its performance with the Fuzzy Rough Set Attribute Reduction on Information Gain Ratio (FRS_GR), signal-to-noise ratio (SNR) and consistency based feature selection (CBFS) methods. Using four benchmark gene microarray datasets, we demonstrate experimentally that our proposed scheme achieves significant empirical success and is biologically relevant for cancer diagnosis and drug discovery.
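As a concrete example of one of the baseline gene selection criteria mentioned above, the following is a minimal sketch of signal-to-noise ratio (SNR) ranking for a two-class microarray matrix; the KFRS selection and the association rule classifier are not reproduced here, and the array shapes and cutoff are illustrative assumptions.

```python
import numpy as np

def snr_scores(X, y):
    """Signal-to-noise ratio per gene for a two-class problem.

    X: (samples x genes) expression matrix, y: binary labels (0/1).
    SNR_g = (mean_1 - mean_0) / (std_1 + std_0); genes with large |SNR|
    separate the two classes well and are kept for classification.
    """
    X0, X1 = X[y == 0], X[y == 1]
    return (X1.mean(axis=0) - X0.mean(axis=0)) / (X1.std(axis=0) + X0.std(axis=0) + 1e-12)

def top_genes(X, y, k=50):
    """Indices of the k genes with the highest absolute SNR."""
    return np.argsort(-np.abs(snr_scores(X, y)))[:k]
```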

An Efficient Method for Internet Traffic Classification and Identification using Statistical Features

Abstract: Traffic classification is a method of categorizing computer network traffic into a number of traffic classes based on various features observed passively in the traffic. The rapid increase in distinct Internet application behaviours has raised the need to distinguish the applications for filtering, accounting, advertising, network design, etc. Traditional methods such as port-based and packet-based approaches, as well as alternative methods based on machine learning, have been used for the classification process. We propose a new traffic classification scheme that exploits the information among the correlated traffic flows generated by an application. Discretized statistical features are extracted and used to represent the traffic flows. Irrelevant and redundant features are removed from the feature set by correlation-based feature selection, which keeps features with high class-specific correlation and low inter-correlation. For the classification process, Naïve Bayes with Discretization (NBD) is used. The proposed scheme is compared with three other Bayesian models, and the experimental evaluation shows that NBD outperforms the other methods even with small supervised training samples.
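A minimal sketch, using scikit-learn, of the Naïve Bayes with Discretization idea described above: flow statistics are binned into discrete values and a categorical Naïve Bayes model is fit on the binned features. The bin count, feature names and toy dataset are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

# Toy flow statistics: [mean packet size, flow duration, packets per second]
# and an application label per flow -- purely illustrative data.
X = np.array([[1400, 12.0, 80], [1350, 10.5, 75], [90, 0.4, 300],
              [110, 0.6, 280], [600, 5.0, 40], [620, 4.2, 45]], dtype=float)
y = np.array(["video", "video", "dns", "dns", "web", "web"])

# Discretize each statistical feature into quantile bins, then fit a
# categorical Naive Bayes model on the binned values (the "NBD" idea).
nbd = make_pipeline(
    KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile"),
    CategoricalNB(),
)
nbd.fit(X, y)
print(nbd.predict([[1380, 11.0, 78]]))   # expected: ['video']
```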

Web Usage Analysis and Web Bot Detection based on Outlier Detection

Abstract: To secure one's network it is important to detect botnets. Optimization techniques are used to improve the efficiency and accuracy of data mining. One application area that has not yet been fully explored but has enormous potential is outlier detection. Web bots are a type of outlier and can be found during web usage analysis. A hierarchical Particle Swarm Optimization (PSO) method is used in this paper to detect web bots among genuine user requests. The proposed scenario deals with tuning the PSO parameters for the selection process; different strategies must be set for each PSO parameter, and these parameter selections should be optimal. The HPSO algorithm provides high accuracy and fast convergence, and the lower computational time needed to execute the process is its main advantage.
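A minimal sketch of the basic PSO update underlying the hierarchical variant described above, minimizing a generic objective over continuous parameters; the inertia and acceleration coefficients are common textbook values, and the objective, swarm size and flat (non-hierarchical) topology are simplifying assumptions, not the paper's HPSO.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain particle swarm optimization of `objective` over R^dim."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        # velocity update: inertia + pull towards personal and global bests
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: tune a threshold that separates bot-like request rates from normal ones.
best, score = pso_minimize(lambda p: (p[0] - 3.0) ** 2, dim=1)
print(best, score)   # converges near 3.0
```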

Efficient Pattern-Based Query search in Text Documents

Abstract: Text mining is a technique that helps the user find useful information in a large collection of digital text documents. It is a challenging issue to find accurate knowledge (or features) in text documents that helps users find what they want. Most existing text mining methods adopt term-based approaches, but they all suffer from the problems of polysemy and synonymy. The effective use and updating of discovered patterns is still an open research issue; pattern deploying and pattern evolving methods have been proposed to refine the patterns and improve the effectiveness of pattern discovery. There are two phases to consider when using pattern-based models in text mining: one is how to discover useful patterns from digital documents, and the other is how to utilize these mined patterns to improve the system's performance. The new approach uses pattern (or phrase)-based techniques, which perform better in comparative studies than term-based methods. It uses a pattern taxonomy model, in which the given documents are split into paragraphs.

A Review of Image Segmentation and Classification Techniques for Automatic Pap smear Screening

Abstract: The Pap smear test has been widely used for the detection of cervical cancer. However, the conventional Pap smear test has several shortcomings, including its subjective nature (dependence on individual interpretation), low sensitivity (i.e. ability to detect abnormal changes) and the need for frequent retesting. There has been a great effort to automate the Pap smear test, and it is one of the important fields of medical image processing. This paper reviews the segmentation and classification methods available in the literature on cervical cell image analysis. Some segmentation techniques are applied to single cervical cell images, while others are designed for single-cell, overlapping or multiple-cell images. Many classification schemes have been proposed for automatic categorization of the cells into two classes: normal versus abnormal. The main aim of all these techniques is to build an automated Pap smear analysis system which analyses Pap smear slides in a short time without fatigue, providing consistent and objective classification results.

Cervical Cancer Detection through Automatic Segmentation and Classification of Pap smear Cells

Abstract: The Pap smear test has been widely used for the detection of cervical cancer. However, the conventional Pap smear test has several shortcomings, including its subjective nature (dependence on individual interpretation), low sensitivity (i.e. ability to detect abnormal changes) and the need for frequent retesting. There has been a great effort to automate the Pap smear test, and it is one of the important fields of medical image processing. This paper therefore proposes a method for automatic cervical cell segmentation and classification. A single cervical cell image is segmented into cytoplasm, nucleus and background using the Radiating Gradient Vector Flow (RGVF) Snake. The Herlev dataset, which consists of 7 cervical cell classes (superficial squamous, intermediate squamous, columnar, mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ), is considered. Different cellular and nuclear features are extracted for training the system. The dataset is tested with artificial neural networks (ANN) to classify the seven different types of cells and to discriminate abnormal from normal cells.

Automated Cervical Cancer Detection through RGVF segmentation and SVM Classification

Abstract: The Pap smear test has been broadly used for the detection of cervical cancer. However, the conventional Pap smear test has several shortcomings, including its subjective nature (dependence on individual interpretation), low sensitivity (i.e. ability to detect abnormal changes) and the need for frequent retesting. There has been a great effort to automate the Pap smear test, and it is one of the critical fields of medical image processing. This paper therefore proposes a method for automatic cervical cancer detection using cervical cell segmentation and classification. A single cervical cell image is segmented into cytoplasm, nucleus and background using the Radiating Gradient Vector Flow (RGVF) Snake. The Herlev dataset, which consists of 7 cervical cell classes (superficial squamous, intermediate squamous, columnar, mild dysplasia, moderate dysplasia, severe dysplasia, and carcinoma in situ), is considered. Different cellular and nuclear features are extracted for training the system. The dataset is tested with a Support Vector Machine (SVM), artificial neural networks (ANN) and a Euclidean distance based system to classify the seven different types of cells and to segregate abnormal from normal cells.
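A minimal sketch of the classification stage described above, assuming the cellular and nuclear features have already been extracted into a feature matrix; scikit-learn's SVC is used, and the feature layout, split ratio and kernel are illustrative choices rather than the paper's configuration. The RGVF segmentation step is not shown.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per segmented cell, e.g. [nucleus area, cytoplasm area,
# nucleus/cytoplasm ratio, nucleus brightness, ...]; y: one of the 7 Herlev
# classes. Random placeholders stand in for the extracted features here.
rng = np.random.default_rng(0)
X = rng.normal(size=(140, 8))
y = rng.integers(0, 7, size=140)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))   # near chance on random data
```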

Density Based and Partition Based Clustering of Uncertain Data Based on KL-Divergence Similarity Measure

Abstract: Data mining problems are significantly influenced by uncertainty in the data. Clustering certain data has been well studied in many fields of data mining, but there is only preliminary work on clustering uncertain data. Traditional clustering algorithms rely mainly on geometric locations, so they are unable to capture the similarity of uncertain objects that have different distributions yet are geometrically indistinguishable. In this paper we use a divergence measure, the KL-divergence, to find the similarity of uncertain objects, and this similarity is integrated into both density based and partition based clustering. We also compare the accuracy of both clustering methods using KL-divergence and using geometric distances as the similarity measure, in order to find the better and more efficient method for clustering uncertain objects.
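A minimal sketch of the KL-divergence similarity referred to above, for uncertain objects represented as discrete probability distributions over a common domain; the symmetrised form shown here is one common choice (an assumption), and the distributions are toy examples.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions over the same domain."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def kl_similarity(p, q):
    """Symmetrised divergence turned into a similarity in (0, 1]."""
    d = 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
    return 1.0 / (1.0 + d)

# Two uncertain objects with the same mean location but different spread:
# the geometric distance between their centres is 0, yet KL tells them apart.
narrow = [0.05, 0.10, 0.70, 0.10, 0.05]
wide   = [0.20, 0.20, 0.20, 0.20, 0.20]
print(kl_similarity(narrow, wide))
```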

Satellite Image Registration based on SURF and MI

Abstract: Registration of satellite imagery is a key step in remote sensing applications such as global change detection, image fusion and feature classification. Manual registration is very time consuming and repetitive, so an automatic method for image registration is needed. In this paper such a method is proposed which is fully automatic and computationally efficient, unlike global registration in which control points are selected manually. It consists of a pre-registration process and a fine-tuning process. The first stage includes feature detection and description using SURF and an outlier removal procedure using RANSAC, which gives the optimizer in the fine-tuning process a near-optimal starting solution. Next, the fine-tuning process is implemented by maximization of mutual information (MI). The proposed scheme is tested on various remote sensing images acquired in different situations (multispectral, multisensor and multitemporal) with the affine transformation model. It is demonstrated experimentally that the proposed scheme is fully automatic and much more efficient than global registration. SURF is widely used as it is among the fastest descriptors, and this paper shows that by increasing the number of matching points, image registration can be done accurately.
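A minimal sketch of the pre-registration stage described above, using OpenCV. ORB is substituted for SURF here because SURF ships only with the non-free opencv-contrib build; the RANSAC-based affine estimation plays the role of the outlier removal step, and the mutual information fine-tuning is not shown. File names and parameters are illustrative.

```python
import cv2
import numpy as np

def preregister(fixed, moving):
    """Estimate an affine transform aligning `moving` to `fixed` (grayscale
    uint8 images) from matched local features, with RANSAC outlier removal."""
    detector = cv2.ORB_create(nfeatures=2000)      # stand-in for SURF
    kp1, des1 = detector.detectAndCompute(fixed, None)
    kp2, des2 = detector.detectAndCompute(moving, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp2[m.trainIdx].pt for m in matches])
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])
    # RANSAC rejects mismatched (outlier) correspondences
    affine, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                           ransacReprojThreshold=3.0)
    return affine, int(inliers.sum())

# fixed  = cv2.imread("scene_t1.png", cv2.IMREAD_GRAYSCALE)
# moving = cv2.imread("scene_t2.png", cv2.IMREAD_GRAYSCALE)
# affine, n_inliers = preregister(fixed, moving)
# warped = cv2.warpAffine(moving, affine, fixed.shape[::-1])
```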

Diverse Visual Cryptography Schemes: A Glimpse

Abstract: A visual cryptography scheme is a cryptographic process which allows visual information (e.g. images, printed text and handwritten notes) to be encrypted in such a way that decryption can be performed by the human visual system, without the help of computers. Diverse visual cryptography schemes have been developed based on different factors such as pixel expansion, meaningless or meaningful shares, contrast, security, the type of secret image and the number of secret images encrypted. This paper discusses most of the visual cryptography schemes and the performance measures used to evaluate them.

Random Grid based Extended Visual Cryptography Schemes using OR and XOR Decryption

Abstract: Visual cryptography (VC) is a paradigm of cryptography which prevents a secret from being modified or destroyed by using the notion of a perfect cipher, and in which the secret can be easily decoded by the human visual system without any complex cryptographic computation. This visual secret sharing scheme was developed by Moni Naor and Adi Shamir in 1994. In this scheme an image is divided into n shares, so that only someone with at least k (k <= n) shares can decode the secret, while k-1 shares reveal no information about the original secret image. However, these initial visual cryptography schemes suffer from drawbacks such as pixel expansion and share management difficulty. This paper therefore discusses three visual cryptography schemes using random grids (RG): OR-based (n, n) VCS, XOR-based (n, n) VCS and extended (n, n) VCS. Since RG is used, all three schemes are designed without pixel expansion. The (n, n) extended VCS offers better share management without pixel expansion and also removes the restriction of using one cover image for all generated shares. The proposed (n, n) extended VCS provides meaningful share images with improved security and visual quality.
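A minimal sketch of random-grid visual cryptography for the simplest (2, 2) case, with both OR (stacking) and XOR reconstruction, to make the "no pixel expansion" point above concrete; the (n, n) and extended constructions in the paper generalise this idea and are not reproduced here.

```python
import numpy as np

def rg_shares(secret, seed=0):
    """(2, 2) random-grid shares of a binary secret image (0 = white, 1 = black).

    Share 1 is a purely random grid. For white secret pixels share 2 copies
    share 1; for black pixels it takes the complement. No pixel expansion.
    """
    rng = np.random.default_rng(seed)
    share1 = rng.integers(0, 2, size=secret.shape)
    share2 = np.where(secret == 0, share1, 1 - share1)
    return share1, share2

secret = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 1]])
s1, s2 = rg_shares(secret)

stacked = s1 | s2          # OR / stacking: black pixels stay black, white is noisy
xor_rec = s1 ^ s2          # XOR decryption: recovers the secret exactly
assert np.array_equal(xor_rec, secret)
```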

Random Grid based Visual Cryptography using a common share

Abstract: Visual cryptography (VC) is a paradigm of cryptography which prevents a secret from being modified or destroyed by using the notion of a perfect cipher, while allowing the secret to be reconstructed by the human visual system. It is a cryptographic technique which can encrypt visual information, such as images and text, and decrypt the secret without a computer (a stacking operation). This visual secret sharing scheme was developed by Moni Naor and Adi Shamir in 1994. Most visual cryptography schemes suffer from drawbacks such as pixel expansion, share management difficulty and poor quality of the recovered image. This paper therefore discusses a visual cryptography scheme using random grids which uses a common share to transmit 'n' binary secrets. Each binary secret image is divided into two share images (random grids) as in the (2, 2) visual cryptography scheme. The scheme uses 'n+1' share images to transmit 'n' secrets, where the extra share is common to all 'n' secrets. Since RG is used, the shares are created without pixel expansion. This scheme can be viewed as a modified (2, 2) random grid based visual cryptography scheme; because one share is common to all n secrets, network bandwidth is used efficiently.