
Abstract/Project Details

Session 2014-15
TITLE “TEXT EXTRACTION FROM SPORTS VIDEO”

PROJECTEE NAME

MS. ASHWINI B. POTDUKHE, MS. ASHWINI G. SHINDE, MS. DHANSHRI A. KADU, MS. ANKITA P. INGOLE, MR. AMOL V. AKHARE

GUIDED BY

Prof. S. U. Balvir

ABSTRACT

A video consists of a sequence of images, text, and audio, and is a valuable source of information. Text present in video carries useful information for automatic annotation, structuring, mining, indexing, and retrieval of video. In the traditional system, scores, wickets, and team names were updated on the website manually. The drawback of that system is that updating the information on the website takes considerable time. Another drawback is that an operator had to watch the match continuously, compare the information previously entered on the website with the latest information displayed in the bottom one-tenth of the video frame, and update the website only when a difference was found. Because the process is handled by humans, it is prone to errors. Moreover, storing the sport-related information and updating it on the website requires a database connection, which introduces data-management responsibility as well as connection problems. We therefore propose a novel method for detecting video text regions containing player information and scores in sports video. First, we identify key frames in the video using the color histogram technique to minimize the number of video frames to process. The key frames are then converted into gray images for efficient text detection, and we crop the regions of the gray image that contain the text information. Next, we apply the Canny edge detection algorithm for text edge detection. Finally, using an OCR tool, the text region image is converted into ASCII text, which is uploaded to the website using XML.
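
The pipeline above can be sketched compactly. The following is a minimal, hypothetical illustration assuming OpenCV and pytesseract; the thresholds HIST_THRESHOLD and EDGE_DENSITY are made-up values, not parameters taken from the project.

```python
# Sketch of the key-frame -> grayscale -> crop -> Canny -> OCR pipeline.
import cv2
import pytesseract

HIST_THRESHOLD = 0.4  # assumed histogram-distance cutoff for a key frame
EDGE_DENSITY = 10.0   # assumed mean Canny response suggesting text is present

def color_hist(frame):
    # Normalized 8x8x8 color histogram of a BGR frame.
    hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def extract_score_text(video_path):
    cap = cv2.VideoCapture(video_path)
    prev_hist, texts = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = color_hist(frame)
        # A frame counts as a key frame when its histogram differs enough
        # from the previous key frame's histogram.
        if prev_hist is None or cv2.compareHist(
                prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > HIST_THRESHOLD:
            prev_hist = hist
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            strip = gray[int(0.9 * gray.shape[0]):, :]  # bottom one-tenth
            edges = cv2.Canny(strip, 100, 200)          # text edge detection
            if edges.mean() > EDGE_DENSITY:             # strip likely holds text
                texts.append(pytesseract.image_to_string(strip))
    cap.release()
    return texts
```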

 

TITLE “DEVELOPMENT OF WEB CRAWLER USING FP-GROWTH ALGORITHM”
PROJECTEE NAME

Ankita C. Khanga, Ankita C. Ballewar, Dipika Channawar, Rahul P. Sawarkar

GUIDED BY

Prof. S. U. Balvir

ABSTRACT

Information on the web is increasing at an enormous speed. Web crawlers are a key component of web search engines, where they are used to collect the pages to be indexed. Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. A web crawler (also known as a robot or a spider) is a system for the bulk downloading of web pages. Web crawlers are used for a variety of purposes; most prominently, they are one of the main components of web search engines, systems that assemble a corpus of web pages, index them, and allow users to issue queries against the index and find the pages that match them. Given a set of seed Uniform Resource Locators (URLs), a crawler downloads all the web pages addressed by the URLs, extracts the hyperlinks contained in the pages, and iteratively downloads the web pages addressed by these hyperlinks. The FP-growth algorithm is currently one of the fastest approaches to frequent itemset mining. With the FP-growth algorithm, searching is optimized and the user gets the exact result in a short time.
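
As a small illustration of the mining step, the sketch below runs FP-growth over a handful of made-up “transactions” (here, the term sets of crawled pages) using the mlxtend library; the data and the 0.5 support threshold are purely illustrative.

```python
# Frequent-itemset mining with FP-growth via mlxtend (illustrative data).
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Each transaction stands for the set of terms seen on one crawled page.
transactions = [
    ["search", "engine", "crawler"],
    ["search", "index"],
    ["crawler", "index", "search"],
    ["engine", "index"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions),
                      columns=te.columns_)
# Itemsets appearing in at least half of the transactions.
print(fpgrowth(onehot, min_support=0.5, use_colnames=True))
```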
Different users have different needs when they submit a query to a web search engine. Common search engines investigate the World Wide Web (WWW) and return many pages for a user query regardless of who submitted it. The aim of web search personalization is to tailor search results to the particular user. Personalization is achieved by using the user's context across related search sessions. Current information retrieval systems do not provide results according to a user's individual needs and interests; only a few, such as Bing and Google, offer this functionality. Consider an example: the term ‘virus’ has different meanings in different domains. In biology, it means a simple submicroscopic parasite of plants, animals, and bacteria that often causes disease, whereas in computing it means a program or piece of code that is loaded onto your computer without your knowledge and runs against your wishes. Depending on the user's domain, different results would be expected for the same query. This shows that different users would prefer different results for the same query, and this is where the need for personalization comes into the picture.

 

TITLE “INFORMATION RETRIEVAL FROM WEB BASED ON USER’S PROFILE”

PROJECTEE NAME

Snehal R. Ghaturle, Ruchira R. Gawande, Ruchika S. Khedkar, Payal A. Admane, Vaibhav W. Mude

GUIDED BY

Prof. V. R. Palekar

ABSTRACT
Web search engines are designed to serve all users, regardless of any individual user's needs. They return roughly the same results for the same query, regardless of the user's real interest. The queries submitted to search engines tend to be short and ambiguous, and are unlikely to express the user's precise needs. Personalization is an important research area that aims to resolve the ambiguity of query terms. To increase the relevance of search results, personalized search engines create user profiles to capture users' personal preferences and thereby identify the actual goal of the input query. A personalized profile is constructed to specify the user's profiling knowledge. This has become an important factor in daily usage, as it improves retrieval effectiveness on the topics the user would look for.
A good user profiling strategy is an essential and fundamental component of search engine personalization. Existing techniques have several drawbacks: they create a single profile for all users and consider only positive preferences. To overcome this problem, this project uses profiling concepts to improve search results by deriving both the user's positive and negative preferences.
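
A toy sketch of the idea, with hypothetical weights: a profile stores positive weights for preferred terms and negative weights for disliked ones, and search results are re-ranked by shifting their base scores accordingly.

```python
# Re-ranking with positive and negative preference weights (toy example).
profile = {"python": 1.0, "snake": -0.8}  # positive: preferred; negative: disliked

def personalized_score(base_score, doc_terms, profile):
    # Shift the engine's base relevance score by the user's preferences.
    return base_score + sum(profile.get(t, 0.0) for t in doc_terms)

results = [("Python tutorial", 0.7, {"python", "tutorial"}),
           ("Snake care guide", 0.9, {"snake", "care"})]
ranked = sorted(results,
                key=lambda r: personalized_score(r[1], r[2], profile),
                reverse=True)
print([title for title, _, _ in ranked])  # the tutorial now outranks the guide
```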
Keywords: Personalization, search engine personalization, search engine, personalized query clustering, user profiling.


TITLE “VISUAL CONTENT BASED IMAGE RETRIEVAL USING RT AND EHD”
PROJECTEE NAME

Amruta Kadarlawar, Amar Thool, Krunal Dahake, Neha Gosavi, Swapnaja Palkar

GUIDED BY

Prof. V. R. Palekar

 ABSTRACT

The Content-Based Image Retrieval (CBIR) approach allows the user to extract an image from a huge database based upon a query. Efficient and effective retrieval performance is achieved by choosing the best transform and classification techniques. However, current transform techniques such as the Fourier Transform, Cosine Transform, and Wavelet Transform struggle to represent discontinuities such as edges in images. To overcome this problem, a recent technique called the Ripplet Transform (RT) has been implemented.
We present an efficient algorithm for CBIR based on the Ripplet Transform and the Edge Histogram Descriptor (EHD) feature of MPEG-7. The proposed algorithm performs image retrieval based only on shape and texture features, not on colour information. The input image is first decomposed into wavelet coefficients, which mainly capture the horizontal, vertical, and diagonal features of the image. After the wavelet transform, the Edge Histogram Descriptor is applied to selected wavelet coefficients to gather information about the dominant edge orientations. The combination of the RT and EHD techniques increases the performance of the image retrieval system for shape- and texture-based search. The performance of various wavelets is also compared to determine the suitability of particular wavelet functions for image retrieval.
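
The orientation-histogram step can be sketched as follows. This is a rough stand-in, assuming PyWavelets in place of the Ripplet Transform, and it reduces EHD to a three-bin histogram built from the horizontal, vertical, and diagonal detail sub-bands.

```python
# Wavelet decomposition plus a simple edge-orientation histogram (sketch).
import numpy as np
import pywt

def edge_orientation_histogram(gray_image):
    # One-level 2-D DWT: cH, cV, cD carry horizontal, vertical, and diagonal
    # detail coefficients, respectively.
    _, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), "haar")
    energies = np.array([np.abs(cH).sum(), np.abs(cV).sum(), np.abs(cD).sum()])
    return energies / energies.sum()  # normalized 3-bin orientation histogram

def l1_distance(h1, h2):
    # Retrieval would compare query and database histograms, e.g. by L1 distance.
    return np.abs(h1 - h2).sum()
```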

 

TITLE “ENHANCED MODERN ENCRYPTION STANDARD”
PROJECTEE NAME

Aruna P. Deotale, Rashmi D. Matre, Komal A. Nimbalkar, Sonali G. Lambat, Suyog B. Borkute   

GUIDED BY

 Prof. P. V. Bhagat

ABSTRACT

In the present world, high security is needed when transmitting digital information from one client or machine to another. In this work, we focus on how to achieve high-order data security while transmitting data from one place to another. To achieve it, we propose a new encryption standard (algorithm) that is an amalgamation, in randomized fashion, of two encryption algorithms developed by Nath et al., namely TTJSA and DJSA.
The proposed method is known as Modern Encryption Standard version-I (MES ver-I). The method splits the file to be encrypted into four parts and encrypts the split sections in various ways using the TTJSA and DJSA cipher methods. Multiple keys are used for encryption and decryption to achieve high-order security.
The primary idea behind the implementation of MES ver-I is to build a strong encryption method that cannot be broken by any kind of brute-force method or differential attack.
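
The split-and-encrypt idea can be illustrated as below. A simple XOR stream stands in for the TTJSA and DJSA ciphers, which are not reproduced here; the four-part split and per-part keys follow the description above.

```python
# Illustrative split-into-four-parts encryption with one key per part.
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR stand-in for TTJSA/DJSA; XOR is its own inverse, so the same
    # call decrypts.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

def mes_like_encrypt(data: bytes, keys: list) -> list:
    assert len(keys) == 4, "one key per part"
    quarter = -(-len(data) // 4)  # ceiling division
    parts = [data[i * quarter:(i + 1) * quarter] for i in range(4)]
    return [xor_cipher(p, k) for p, k in zip(parts, keys)]

ciphertext = mes_like_encrypt(b"attack at dawn",
                              [b"k1", b"k2", b"k3", b"k4"])
```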

 

TITLE “AUTOMATED ATTENDANCE MONITORING SYSTEM USING FACE RECOGNITION”
PROJECTEE NAME

Jeba Baig, Payal Bharane, Surabhi Deshmukh, Yugandhara Bhoge, Tarun Chauhan

  GUIDED BY

Prof. J. R. Yadav

 ABSTRACT

In the existing system, student attendance is taken manually, which is a time-consuming process. Moreover, in a large classroom it is very difficult to verify one by one whether the authenticated students are actually present.
The proposed system describes a method for automated student attendance monitoring that integrates face detection using the Viola-Jones object detector with face recognition using the Principal Component Analysis (PCA) algorithm.
In the proposed system, a camera captures the image, and the detected face is then processed for recognition. The system recognizes the faces of students who have been registered in the database along with their names and IDs. The captured image is compared with the training data set stored in the database, and attendance is marked on this basis.
The project demonstrates how face recognition can be used for an efficient attendance system that automatically records the presence of enrolled individuals within the respective venue. The proposed system also maintains a log file recording each individual's entry with respect to a universal system time.
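
A condensed sketch of the detect-then-recognize flow described above: Viola-Jones via OpenCV's bundled Haar cascade, and PCA-based recognition via the Eigenfaces recognizer from opencv-contrib. The image size and the training call are placeholders.

```python
# Face detection (Viola-Jones) followed by PCA-based recognition (Eigenfaces).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.EigenFaceRecognizer_create()  # PCA-based; needs opencv-contrib
# recognizer.train(registered_faces, labels)  # faces registered with names/IDs

def mark_attendance(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, distance = recognizer.predict(face)
        # A small distance means a close match to the training set;
        # attendance would be marked against this student ID.
        print("student id:", label, "distance:", distance)
```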

 

TITLE “AUTOMATIC TOLL COLLECTION IN INDIAN CONDITIONS”
PROJECTEE NAME

Komal Chaudhari, Shital Borghare, Aditya Bhosale, Chaitali Virkhede, Nilima Umate

  GUIDED BY

Prof. J. R. Yadav

ABSTRACT

In this project, the system examines the captured image of the vehicle's number plate; the toll collection system then retrieves and processes the information from the captured image. At any toll booth, vehicles have to stop to pay the toll. We are trying to develop a system that reduces human effort, traffic, and time consumption at the toll booth and facilitates its smooth operation.
In this system, a camera captures the image of the vehicle's number plate. The captured image is converted into text using Automatic Number Plate Recognition (ANPR), the toll is debited from the customer's account, and the gate is then opened.
Moreover, in the proposed system, if a vehicle is stolen and an entry has been made in the central database by the police, then when the vehicle passes through the toll booth a silent alarm buzzes, indicating to the operator that the vehicle is stolen. For vehicle identification, the vehicles' information is already stored in the central database, so the captured number is sent from the toll booth to the server.
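
The booth-side logic might look like the sketch below, assuming pytesseract for the ANPR step; the account table, stolen-plate list, and toll amount are hypothetical.

```python
# Simplified toll-booth flow: read plate, check stolen list, debit account.
import cv2
import pytesseract

def process_vehicle(plate_image, stolen_plates, accounts, toll=50):
    gray = cv2.cvtColor(plate_image, cv2.COLOR_BGR2GRAY)
    plate = pytesseract.image_to_string(gray).strip()
    if plate in stolen_plates:
        return "SILENT_ALARM", plate   # alert the operator, keep the gate closed
    if accounts.get(plate, 0) >= toll:
        accounts[plate] -= toll        # debit the customer's account
        return "OPEN_GATE", plate
    return "INSUFFICIENT_BALANCE", plate
```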


TITLE “SECURED DATA TRANSMISSION FOR WIRELESS NETWORK”
PROJECTEE NAME

Komal Y. Gawande, Dipashri D. Bhende, Neha M. Nandagawali, Lina L. Nikhare, Monali P. Khobragade, Bharti S. Sayare

GUIDED BY

Prof. K. S. Satpute

ABSTRACT

Nowadays, security is one of the major issues in data transmission over wireless networks. Existing systems use security algorithms to provide secure data transmission over networks, but in our proposed system, secure data transmission is achieved without the use of any security algorithm.
Our proposed system uses a 'mobility cluster head' instead of security algorithms for data transmission over wireless networks. The mobility cluster head contains the information of each node within the wireless network, and if any unauthorized node tries to hack the information, the mobility cluster head, or Global Inspector (GI), tries to secure the configured network from the unidentified attackers.
Existing work on secure data transmission includes the design of many security algorithms and system infrastructures. The proposed system secures data transmission by dynamically routing packets between each source and destination. For data transmission, two protocols, Ad-hoc On-Demand Distance Vector (AODV) and Destination-Sequenced Distance Vector (DSDV), will be used to maintain the routing tables of the network. We will also try to determine which protocol is more efficient for data transmission over a wireless network without the use of any security algorithm.
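
As a toy illustration of the table-driven routing that DSDV maintains, the sketch below accepts a new route only when it carries a fresher sequence number, or a shorter path at the same sequence number; the node names are made up.

```python
# DSDV-style routing table update (toy example).
routing_table = {}  # destination -> (next_hop, hop_count, seq_no)

def dsdv_update(dest, next_hop, hops, seq_no):
    current = routing_table.get(dest)
    if (current is None or seq_no > current[2]
            or (seq_no == current[2] and hops < current[1])):
        routing_table[dest] = (next_hop, hops, seq_no)

dsdv_update("node_D", "node_B", 3, 10)
dsdv_update("node_D", "node_C", 2, 10)  # same seq number, fewer hops: replaces
print(routing_table)                    # {'node_D': ('node_C', 2, 10)}
```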


TITLE “EXTRACTION OF INFORMATIVE BLOCKS FROM WEB PAGES”
PROJECTEE NAME

Asmita M. Shambharkar, Madhuri J. Orke , Rakesh M. Kohale , Shreyash G. Balbudhe, Sneha K. Bhoyar

GUIDED BY

Prof. R. M. Shete

ABSTRACT

A web page generally contains data along with navigation panels, advertisements, and copyright and privacy notices. Apart from the data, these elements do not carry any important information and can be called non-informative blocks. Because they are non-informative, they can affect the results of web data mining, so it is important to separate the main data, i.e., the informative blocks, from the non-informative blocks of a web page. Within a website, the non-informative blocks generally appear across different pages in the same format, and the data they contain is also the same; for informative blocks, both the data and the format differ. We need a site-level structure to capture the common format and content of these blocks. The DOM tree structure is available at page level, and many tools exist to construct the DOM tree of a web page, but the DOM tree is not useful at site level. We therefore construct a Site Style Tree (SST) for a website. By analyzing the SST, we can identify which parts of it are informative and which are non-informative. Since no tool is available to construct a style tree for a given website, this work aims at constructing one and separating the informative and non-informative blocks of the website.
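
A small sketch of the underlying intuition, assuming BeautifulSoup: blocks whose tag "signature" repeats on every page of a site are treated as non-informative. This is only a stand-in for the full Site Style Tree construction.

```python
# Flag block signatures shared by every page as non-informative (sketch).
from collections import Counter
from bs4 import BeautifulSoup

def block_signature(tag):
    # A block's "style" here is just its tag plus its direct children's tags.
    return (tag.name, tuple(c.name for c in tag.find_all(recursive=False)))

def noninformative_signatures(pages_html):
    counts = Counter()
    for html in pages_html:
        soup = BeautifulSoup(html, "html.parser")
        body = soup.body or soup
        counts.update({block_signature(t)
                       for t in body.find_all(recursive=False)})
    # Signatures present on every page look like navigation or boilerplate.
    return {sig for sig, n in counts.items() if n == len(pages_html)}
```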


TITLE “IMPLEMENTATION OF CLUSTERING TO GET BEST DEALS TO YOUR NEAREST LOCATION USING GPS”

PROJECTEE NAME

Ankush G. Kubde, Anjali H. Toppo, Bhagyashri D. Bhongade, Rupali C. Satpute, Rutuja B. Shete

GUIDED BY

Prof. A. V. Saurkar

ABSTRACT

Nowadays, people do not have much time to find the right thing for themselves, and they do not want to spend much time learning about deals. For example, if a person goes to one mall, he or she may miss the offers at other malls. That is why there is a need for an application that stays with people all the time.
Mobile applications can be one of the best ways to keep consumers engaged with a brand while they are on the move. With the increase in demand for smartphones and the efficiency of wireless networks, the demand for mobile applications has grown enormously. Android is one of the most popular open-source platforms, offering developers full access to the framework APIs to build innovative applications.
The main aim of this project is to build an Android application, named Ollie, that helps users find the best deals in a specified location. The main features provided by this application are GPS (Global Positioning System) and location-based services, geo-fencing, and alerts. Within a geo-fenced area, the application shows the deals or offers available in the shops, grouped using the K-means clustering algorithm. When a user enters a shop's geo-fenced area, the application sends a notification. When a deal expires, it is automatically deleted from the reminder as well as from the application.
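
The clustering and geo-fence check might be prototyped as below, assuming scikit-learn; the coordinates and radius are invented, and a real app would use the Android location APIs and haversine distances.

```python
# K-means over deal locations plus a crude geo-fence check (sketch).
import numpy as np
from sklearn.cluster import KMeans

deal_coords = np.array([[21.146, 79.088], [21.147, 79.090],
                        [21.120, 79.050], [21.121, 79.052]])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(deal_coords)

def in_geofence(user, center, radius_deg=0.005):
    # Planar approximation; adequate at city scale for a sketch.
    return np.linalg.norm(np.asarray(user) - center) < radius_deg

user = (21.1465, 79.0885)
for center in km.cluster_centers_:
    if in_geofence(user, center):
        print("notify: deals available near", center)  # alert feature
```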

 

TITLE “ONLINE CODE COMPILER AND STORAGE USING PRIVATE CLOUD”
PROJECTEE NAME

Nitin Naidu, Priyanka Surkar, Swapnil Gaikwad, Harsha Shende, Heena Timande

GUIDED BY

Prof. P. M. Gourshettiwar

ABSTRACT
 

Cloud computing builds on decades of research in networking, the web, and software services. An online compiler is being built so that a client can easily write, compile, and debug a program online. An online compiler using a private cloud is a program that functions equivalently to an actual compiler but does not require the actual compiler to be installed or licensed on the machine on which it runs. Several benefits make such networked software desirable.
A web-based application can be used remotely over any network connection. Any operating system can be used to access it, making it platform independent. No local installation or maintenance work is necessary, and access can be controlled and limited if required by password-protecting the ad-hoc network.
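
A minimal sketch of the compile-and-run service, assuming Flask; the endpoint name and time limit are invented, and a real deployment would sandbox execution rather than run submissions directly.

```python
# Toy online-compiler endpoint: accept source, run it, return the output.
import subprocess
import tempfile
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/compile", methods=["POST"])
def compile_and_run():
    # Persist the submitted source to a temporary file.
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as f:
        f.write(request.data)
        path = f.name
    # Execute with a short timeout; unsandboxed, so for illustration only.
    proc = subprocess.run(["python", path], capture_output=True,
                          text=True, timeout=5)
    return jsonify({"stdout": proc.stdout, "stderr": proc.stderr})
```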


TITLE “RESILIENT IDENTITY CRIME DETECTION”

PROJECTEE NAME

Rupali Kumbhalkar, Priyanka Bobde, Pragati Pazare, Rahul Akotkar

GUIDED BY

Prof. V. V. Bhujade

ABSTRACT

Identity crime is well known, prevalent, and costly, and credit application fraud is a specific case of it. Existing non-data-mining detection systems based on business rules, scorecards, and known-fraud matching have limitations. To address these limitations and combat identity crime in real time, this project proposes a new multilayered detection system complemented with two additional layers: communal detection (CD) and spike detection (SD). CD finds real social relationships to reduce the suspicion score, and is tamper-resistant to synthetic social relationships; it is a whitelist-oriented approach over a fixed set of attributes. SD finds spikes in duplicates to increase the suspicion score, and is probe-resistant for attributes; it is an attribute-oriented approach over a variable-size set of attributes. Together, CD and SD can detect more types of attacks, better account for changing legal behavior, and remove redundant attributes. Experiments were carried out on CD and SD with several million real credit applications. The results support the hypothesis that successful credit application fraud patterns are sudden and exhibit sharp spikes in duplicates. Although this work is specific to credit application fraud detection, the concept of resilience, together with adaptivity and data quality, is general to the design, implementation, and evaluation of all detection systems.
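
A toy version of the spike-detection idea: count how often an attribute value repeats inside a sliding window of recent applications, and raise the suspicion score when duplicates spike. The window length and threshold below are invented.

```python
# Sliding-window duplicate counting for spike detection (toy example).
from collections import Counter, deque

WINDOW, SPIKE = 100, 5  # assumed window length and duplicate threshold
recent = deque(maxlen=WINDOW)
counts = Counter()

def suspicion(value):
    if len(recent) == WINDOW:
        counts[recent[0]] -= 1  # this value is about to leave the window
    recent.append(value)
    counts[value] += 1
    # A sudden, sharp spike in duplicates drives the score toward 1.0.
    return 1.0 if counts[value] >= SPIKE else counts[value] / SPIKE
```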

 

TITLE “A NOVEL APPROACH PROVIDING PRIVACY AND SECURITY IN OSN”
PROJECTEE NAME

Pratiksha Deoghare, Neha Singh, Ashwini Wankhade, Sneha Choudhari, Pooja Rokade

GUIDED BY

Prof. A. V. Saurkar

 ABSTRACT

Privacy is one of the friction points that emerge when communication is mediated by Online Social Networks (OSNs). Different communities of computer science researchers have framed the ‘OSN privacy problem’ as one of surveillance, institutional privacy, or social privacy, and in tackling these problems they have treated them as if they were independent. We argue that the different privacy problems are entangled and that research on privacy in OSNs would benefit from a more holistic approach. In this article, we first provide an introduction to the surveillance and social privacy perspectives, emphasizing the narratives that inform them as well as their assumptions, goals, and methods. We then juxtapose the two approaches in order to understand their complementarity, and to identify potential integration challenges as well as research questions that have so far been left unanswered.

 