Browsing by Author "T, Saranya"
Now showing 1 - 6 of 6
Item BIG DATA (Sri Sankara Arts and Science College, 2018) T, Saranya; D, Nivetha
Big data is a term applied to data sets whose size or type is beyond the ability of traditional relational databases to capture, manage, and process with low latency. Big data has one or more of the following characteristics: high volume, high velocity, or high variety. It comes from sensors, devices, video/audio, networks, log files, transactional applications, the web, and social media, much of it generated in real time and at very large scale.

Item DEEP LEARNING - A LITERATURE SURVEY (PSGR Krishnammal College for Women, 2018-09) T, Saranya; D, Nivetha
Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations; the learning can be supervised, semi-supervised, or unsupervised. Deep learning models are vaguely inspired by information processing and communication patterns in biological nervous systems, yet have various differences from the structural and functional properties of biological brains, which make them incompatible with neuroscience evidence.

Item AN EFFICIENT LEARNING OF CONSTRAINTS FOR SEMI-SUPERVISED CLUSTERING USING NEIGHBOUR CLUSTERING ALGORITHM (International Journal on Recent and Innovation Trends in Computing and Communication, 2014-12) T, Saranya; K, Maheswari
Data mining is the process of finding previously unknown and potentially interesting patterns and relations in databases; it is a step in the knowledge discovery in databases (KDD) process. The structures that are the outcome of the data mining process must meet certain conditions to be considered knowledge: validity, understandability, utility, novelty, and interestingness. Researchers identify two fundamental goals of data mining: prediction and description. The proposed work addresses the semi-supervised clustering problem, in which it is known (with varying degrees of certainty) that some sample pairs are (or are not) in the same class. A probabilistic model for semi-supervised clustering based on Shared Semi-supervised Neighbour Clustering (SSNC) provides a principled framework for incorporating supervision into prototype-based clustering, combining the constraint-based and fitness-based approaches in a unified model. The proposed method first performs a constraint-sensitive assignment of instances to clusters: points are assigned so that the overall distortion of the points from the cluster centroids is minimised while a minimum number of must-link and cannot-link constraints are violated. Experimental results on UCI Machine Learning semi-supervised datasets show that the proposed method achieves higher F-measures than many existing semi-supervised clustering methods.
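The abstract above describes a constraint-sensitive assignment step: each point joins the cluster that minimises its distortion from the centroid while violating as few must-link/cannot-link constraints as possible. The paper's SSNC algorithm is not reproduced in this listing; the following is only a minimal illustrative sketch of that general idea, in the style of a penalised (constrained) k-means. The function name, penalty scheme, and parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def constrained_kmeans(X, k, must_link, cannot_link,
                       penalty=10.0, n_iter=50, seed=0):
    """Toy constraint-sensitive k-means: a point joins the cluster that
    minimises its squared distance to the centroid, plus a fixed penalty
    for every must-link/cannot-link constraint that choice would violate.
    Illustrative only; not the paper's SSNC algorithm."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            cost = ((centroids - x) ** 2).sum(axis=1)  # distortion term
            for a, b in must_link:                     # pair should share a cluster
                other = b if i == a else a if i == b else None
                if other is not None:
                    cost += penalty * (np.arange(k) != labels[other])
            for a, b in cannot_link:                   # pair should be split
                other = b if i == a else a if i == b else None
                if other is not None:
                    cost += penalty * (np.arange(k) == labels[other])
            labels[i] = int(np.argmin(cost))
        for c in range(k):                             # standard centroid update
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels
```

For example, with `must_link = [(0, 1)]` and `cannot_link = [(0, 2)]`, the assignments of points 0, 1, and 2 are steered toward honouring the supervision, while unconstrained points behave as in ordinary k-means.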
Item KNOWLEDGE DISCOVERY AND DATA MANAGEMENT USING GENERIC ALGORITHMS (Sri Ramakrishna College of Arts and Science for Women, 2019-01) T, Saranya; D, Nivetha
The term Knowledge Discovery in Databases, or KDD for short, refers to the broad process of finding knowledge in data, and emphasizes the "high-level" application of particular data mining methods. It is of interest to researchers in machine learning, pattern recognition, databases, statistics, artificial intelligence, knowledge acquisition for expert systems, and data visualization. The unifying goal of the KDD process is to extract knowledge from data in the context of large databases. It does this by using data mining methods (algorithms) to extract (identify) what is deemed knowledge, according to specified measures and thresholds, using a database along with any required pre-processing, subsampling, and transformation of that database.

Item A STUDY ON PRECAUTION MEASURES FOR THE VULNERABILITIES IN CLOUD TECHNOLOGY (PSGR Krishnammal College for Women, 2015-01) T, Saranya; S, Mohana Priya
Today's global village is becoming equipped with ever more modern techniques, and various technologies have emerged and are being developed to fulfil people's needs. Cloud computing technology has contributed over a large area, especially in the field of networking and communication; several companies have developed their own search engines and other websites for communication in which cloud computing plays a major role. However, some companies face inconvenience in using cloud technology, mainly owing to threats and subtle vulnerabilities. In this paper, we present an idea for reducing the problems of data breaches and denial of service, which are major obstacles to the use of the cloud as a worldwide technology. The proposed system explains the use of a dual-encryption technique together with automated deletion of duplicated copies of the data.

Item A STUDY ON PRECAUTION MEASURES FOR THE VULNERABILITIES IN CLOUD TECHNOLOGY (PSGR Krishnammal College for Women, Coimbatore, 2015-01) S, MohanaPriya; T, Saranya
Today's global village is becoming equipped with ever more modern techniques, and various technologies have emerged and are being developed to fulfil people's needs. Cloud computing technology has contributed over a large area, especially in the field of networking and communication; several companies have developed their own search engines and other websites for communication in which cloud computing plays a major role. However, some companies face inconvenience in using cloud technology, mainly owing to threats and subtle vulnerabilities. In this paper, we present an idea for reducing the problems of data breaches and denial of service, which are major obstacles to the use of the cloud as a worldwide technology. The proposed system explains the use of a dual-encryption technique together with automated deletion of duplicated copies of the data.
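The two mechanisms named in the cloud abstracts above, dual encryption and automated deletion of duplicated copies, are not specified further in this listing. The following is a minimal sketch of how the two ideas are commonly combined, assuming the third-party `cryptography` package; the in-memory `store` and the `put`/`get` helpers are hypothetical names for illustration, not the authors' system.

```python
import hashlib
from cryptography.fernet import Fernet

# Two independent keys: layered ("dual") encryption means a breach of
# one key alone does not expose the stored data.
inner_key, outer_key = Fernet.generate_key(), Fernet.generate_key()
inner, outer = Fernet(inner_key), Fernet(outer_key)

store = {}  # content-hash -> ciphertext (stand-in for a cloud object store)

def put(data: bytes) -> str:
    """Encrypt twice and store; identical plaintexts hash to the same
    digest, so duplicated copies are detected and automatically dropped."""
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:                      # automated de-duplication
        store[digest] = outer.encrypt(inner.encrypt(data))
    return digest

def get(digest: str) -> bytes:
    """Reverse the two encryption layers in the opposite order."""
    return inner.decrypt(outer.decrypt(store[digest]))

ref1 = put(b"quarterly report")
ref2 = put(b"quarterly report")                  # duplicate: not stored twice
assert ref1 == ref2 and len(store) == 1
assert get(ref1) == b"quarterly report"
```

Note that this sketch detects duplicates by hashing the plaintext before encryption; de-duplicating data that is already encrypted would instead require something like convergent encryption, where the key is derived from the content so that identical plaintexts yield identical ciphertexts.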