Computer Science

Transcription

Listing of Intel ISEF Project Abstracts
Intel ISEF Year of Abstracts: ALL
Category to Limit to: CS
Display ALL Abstracts
Order by ISEF Year + Finalist Last Name
2003 - CS313
MANIPULATING THE MATRIX
Rebecca F. Ashton, Ruben M. Guadiana
Silver High School, Silver City, New Mexico, USA
The purpose of this project is to create a Java application that will encrypt and decrypt a message using fundamental matrix multiplication. In order to encrypt a
message, the position(s) of a character within the message and its American Standard Code for Information Interchange (ASCII) value fill a character matrix.
This matrix is then multiplied by a random matrix created by incrementing a randomly generated seed by one integer each time it is entered into the matrix. The
seed is multiplied by 26 times the length of the matrix and attached to the end of the encrypted matrix. This process is repeated for every character present in
the message and the resulting matrices are saved to a data file, creating a secure key that can be passed safely from user to user. To decrypt this key, the
program imports the file and separates the matrices into the encrypted matrices. The seed is removed from each matrix, divided by 26 times the length, and
incremented by one integer to recreate the original random matrices. The inverse of the random matrix is calculated and multiplied by the encrypted character
matrix, returning the position(s) and ASCII value of the character contained in the original character matrix. After every matrix in the file has been decrypted, the
message is recreated and returned to the user. Matrix multiplication proved to be an effective mathematical model, as, when implemented in the cryptographic
computer program, data was successfully encrypted into a secure key and also correctly decrypted.
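As a rough illustration of the scheme described above, here is a minimal Python sketch of per-character matrix encryption: each [position, ASCII] pair is multiplied by an invertible random matrix derived from an incrementing seed, and decryption applies the inverse. The seed handling, matrix size, and value ranges are illustrative assumptions, not the authors' exact construction.

    # Sketch only: seed-derived invertible matrices, one per character.
    import numpy as np

    def random_invertible(seed, n=2):
        rng = np.random.default_rng(seed)
        while True:
            m = rng.integers(1, 10, size=(n, n)).astype(float)
            if abs(np.linalg.det(m)) > 1e-9:   # keep only invertible keys
                return m

    def encrypt(message, seed):
        blocks = []
        for pos, ch in enumerate(message):
            key = random_invertible(seed + pos)          # seed incremented per character
            blocks.append(key @ np.array([pos, ord(ch)], dtype=float))
        return blocks

    def decrypt(blocks, seed):
        chars = {}
        for i, enc in enumerate(blocks):
            key = random_invertible(seed + i)
            pos, code = np.linalg.inv(key) @ enc         # undo the multiplication
            chars[round(pos)] = chr(round(code))
        return "".join(chars[i] for i in range(len(chars)))

    assert decrypt(encrypt("MATRIX", seed=42), seed=42) == "MATRIX"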
2003 - CS314
CLASH OF THE ALGORITHMS: COMPLEX GAMING STRATEGIES SIMULATED WITH ARTIFICIAL INTELLIGENCE
Dietrich Ian Bachman, Charles James Brock, Noah Michael Shepard
APS Career Enrichment Center, Albuquerque, New Mexico, United States of America
Our primary objective is to apply various AI (artificial intelligence) algorithms to the game of Othello to determine which will become the best player. Our secondary objective is to test each algorithm against human players. We have developed a minmax algorithm and two neural networks.

A minmax algorithm searches all possible combinations of moves to a certain level, compares these possibilities, and finds the player's most beneficial move. We also used two neural networks with different training methods. Back propagation (back-prop) training adjusts parts of the network using mathematical formulae to teach it to give desired output. Instead of directly training a network, genetic algorithm (GA) evolution creates a population of genomes which represent networks, and applies evolutionary rules to the population. Networks battle each other in Othello games, and winning networks "reproduce" until a very powerful network is evolved.

Against the other AIs, the GA is the champion. It levels off in its wins over minmax, but continues to increase its wins against back-prop. Back-prop was tested against minmax by determining the highest level of minmax that it could beat over time. The results fluctuated widely, showing that back-prop did not learn correctly.

Results against human players are quite different. The champion GA network is defeated by very large margins, and the trained back-prop player loses, but not by as much. The minmax usually defeats human players. This leads to our most important conclusion: AIs which learn each other's strategies may not necessarily beat a less predictable human player.
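The minmax idea the team describes can be sketched generically in Python. The game below is a toy subtraction game (take 1-3 from a pile; taking the last object wins) standing in for Othello, which would need real move generation and a board evaluation function.

    # Depth-limited minmax sketch over a toy game, not the authors' code.
    def minimax(state, depth, maximizing):
        if state == 0:                        # previous player took the last object
            return (-1 if maximizing else 1), None
        if depth == 0:
            return 0, None                    # search horizon: neutral heuristic
        best, best_move = (float("-inf"), None) if maximizing else (float("inf"), None)
        for mv in (m for m in (1, 2, 3) if m <= state):
            score, _ = minimax(state - mv, depth - 1, not maximizing)
            if (maximizing and score > best) or (not maximizing and score < best):
                best, best_move = score, mv
        return best, best_move

    print(minimax(10, depth=10, maximizing=True))   # (1, 2): taking 2 forces a win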
Awards won at the 2003 ISEF
Team Second Award of $400 for each team member - IEEE Computer Society
2003 - CS052
A COMPUTERIZED ENUMERATION OF THE ALKANE SERIES
Kevin George Ballard
Academic Magnet High School, Charleston, SC, USA
An algorithm and data structure were developed for the purpose of determining the individual structures of all the isomers of any alkane, a hydrocarbon
characterized by a non-looping branch structure. This was accomplished by cycling through possible structures, one at a time, and comparing different sections
of each structure against itself during the creation process. Each structure would either be deemed unique and recorded, or discarded, based on a set of
priority rules. The resulting computations for the number of isomers for alkanes with carbon contents of up to 18 match the literature exactly. However, after 18
carbons a small number of excess structures are returned. The algorithm’s efficiency is its greatest accomplishment, far surpassing that of previous work. It
uses only a minimal amount of memory, and exhibits a linear relationship between the number of isomers calculated and the computational time required.
Awards won at the 2003 ISEF
DuPont will additionally recognize Honorable Mention winners with an award of $500. - National Aeronautics and Space Administration
2003 - CS307
BEACON: ANALYTICAL INSTRUMENTATION SOFTWARE FOR IDENTIFYING FLUORESCENT OLIGONUCLEOTIDES USED IN SPECTRALLY
ENCODED MICROBEADS
David E. Bennett, Andrew G. Ascione, Aaron D. Schulman
Broadneck High School, Annapolis, Maryland, USA
PURPOSE - The purpose of the BEACON project is to develop software that can identify the exact combination of fluorescent oligonucleotides (Molecular Beacons) in a solution. The software will be used to support the patent of a new process that uses Molecular Beacons to spectrally encode Microbeads used in biological assays. The software will process data from a spectrofluorometer.

APPROACH AND ENGINEERING EXPERIMENT - BEACON was developed by a project team consisting of a Science and Mathematics Expert, a Software Designer, and a Software Coder. The BEACON team used a top-down approach to create the software, utilizing requirements analysis, design, coding, integration, testing, and validation.

RESULT - BEACON is written in the Matlab scripting language and has a Graphical User Interface. BEACON uses smoothing algorithms, identification and analysis of key derivative points, and a reference table of melting temperatures and corresponding beacons. Processing is performed on real data arrays.

CONCLUSION - The team has successfully developed instrumentation software capable of taking data from a spectrofluorometer and analyzing it to obtain the specific melting point of a Molecular Beacon with a specific fluorophor and the corresponding beacon name. This software, combined with the new process of utilizing Molecular Beacons to encode Microbeads, will dramatically advance the field of biological testing. Utilizing current testing techniques, it is only possible to test for a small number of genetic markers during a single test. With BEACON and the new process, simultaneous identification of hundreds of genetic markers during a single test will be possible.
Awards won at the 2003 ISEF
European Union Contest for Young Scientists - European Union Contest for Young Scientists
Team First Award of $500 for each team member - IEEE Computer Society
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Team Projects - Presented by Science News
2003 - CS004
OPTIMIZING 802.11B WIRELESS SIGNALS
Prince Suresh Bhakta
West High School, Mankato, MN, USA
As the digital revolution spreads across the world, new implementations are being made. One area of advancement is in the wireless technology field. Many
unique standards exist for wireless technology, but the most universal is 802.11b or WiFi. It provides adequate speed at an affordable price, and is the most
commonly used. One of the main problems with wireless technology is distance. Many “out of reach” users cannot benefit from the maximum capabilities of the
802.11b standard. Getting internet or setting up large networks is difficult when you are restrained by physical distance. A solution needs to be developed.
Normal antennas need to be modified in order to maximize signal distance and reception. This new modified antenna should have the potential to receive
strong signals at far distances. This project hopes to develop on the 802.11b standard by making a signal search and optimization system. A frequency
analysis system was used as well as a new satellite antenna to amplify the signal. Both of these systems working together will result in an extension of the
802.11b standard which will benefit "out of reach" users. A node was accessed via a laptop computer. The laptop was connected to a modified receiving
antenna. Both the laptop and antenna were moved away from the transmitter at varied distances. At each increment, signal strength and bandwidth were
checked. By testing several types of antenna modifications, this project hoped to help "out of reach" users. No one should be without the internet.
2003 - CS059
DEVELOPMENT OF A COMPUTER LANGUAGE FOR CHILDREN
Brittney Dyanne Black
Black Homeschool, Williston, ND, USA
As technology advances, the skills and tools needed to utilize that technology need to be learned earlier and earlier. This project endeavors to accomplish this by proving that a computer language can be developed and a computer program interpreter written which can teach basic computer-programming skills to children.

A language syntax was developed which gives the programmer a variety of graphical functions. A computer program called "Bee-Basic" was then written which interprets the programmer's commands into a graphical presentation.

After the program was completed, three children ages eight, ten, and twelve were given 20 minutes of instruction on how to write a program using Bee-Basic. Then each child was given 3 programming tasks. The level of difficulty increased with each task. After each child completed his/her programs, the following elements of each program were evaluated and given a score from 1 to 10, with a score of 7 or above indicating an acceptable level of skill: 1) Syntax, 2) Coordinate system, 3) Logic flow, 4) Programming time.

Overall the children evaluated in this project did very well, as they never scored below a 7.

By writing a program that only interprets certain commands, some programming freedom was lost. However, by developing a computer language that is easier for children to learn, it allows them to learn some basics of computer programming that would otherwise be inaccessible to them for years to come.

This project was successful since a computer language was developed and a computer program written which can teach computer-programming skills to children.
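The abstract does not give Bee-Basic's actual syntax, so the toy Python interpreter below invents two commands (MOVE and SAY) purely to illustrate how a line-based children's language can be interpreted; drawing is replaced by printed output to keep the sketch self-contained.

    # Toy line-based interpreter in the spirit of "Bee-Basic" (invented syntax).
    def run(program):
        x, y = 0, 0
        for lineno, line in enumerate(program.strip().splitlines(), 1):
            parts = line.split()
            if not parts:
                continue
            cmd, args = parts[0].upper(), parts[1:]
            if cmd == "MOVE":                 # MOVE dx dy
                x, y = x + int(args[0]), y + int(args[1])
                print(f"moved to ({x}, {y})")
            elif cmd == "SAY":                # SAY word...
                print(" ".join(args))
            else:
                print(f"line {lineno}: unknown command {cmd!r}")

    run("""
    SAY hello
    MOVE 10 5
    MOVE -3 2
    """)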
Awards won at the 2003 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
2003 - CS305
RANDOM NUMBER GENERATION AND CRYPTOGRAPHY
Omar B Bohsali, Greg S. Kitson, Vipul S. Tripathi
McLean High School, McLean, Virginia, USA
Every second, more than a thousand credit card numbers are submitted online, millions of computerized car-crash simulations generated, and thousands of possibilities in war simulations chosen. Without truly random numbers, all of these events would be flawed.

The objective of this study was to discover whether computers are capable of generating truly random numbers by testing the pseudorandom number functions of C++, Java, Visual Basic, Perl, and JavaScript. Analysis was conducted by recording how many times each number appeared. According to the property of uniform distribution, values must occur equally within a truly random set. Each set's distance from the optimal value of uniform distribution was measured, yielding a correctional value.

At the conclusion of the experimental phase, it was found that numbers generated by computers are not random. They are pseudorandom, meaning near random. This was shown by the irregular distribution present in every set generated by a computer. JavaScript was the nearest-to-random language tested, with a distance from uniform distribution of 4154.

SSL and almost all other methods of encryption are based on random numbers. Digits of a credit card are manipulated using random numbers, so even if a hacker could intercept the transmission, the data would be useless because it would be in encrypted form. The contribution of this project is to strengthen internet security by finding flaws in random number distribution schemes made by random number generators.
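The uniform-distribution test described above can be sketched as follows; Python's built-in generator stands in for the five languages the project compared, and the bucket count is an arbitrary choice.

    # Sum of per-bucket deviations from the expected uniform count.
    import random

    def uniformity_distance(trials=100_000, buckets=100):
        counts = [0] * buckets
        for _ in range(trials):
            counts[random.randrange(buckets)] += 1
        expected = trials / buckets
        return sum(abs(c - expected) for c in counts)

    print(uniformity_distance())   # 0 would be perfectly uniform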
2003 - CS301
A NOVEL ALGORITHM FOR CGI SMOOTHING: A ROTATIONAL EQUILIBRIUM APPROACH
Yuriy Bronshteyn, Nikhil Mirchandani, Geoffrey Miller
duPont Manual High School, Louisville, KY, 40208
This paper introduces a highly efficient system for smoothing triangular and quadrilateral mesh - the structural foundation of computer-generated imagery
(CGI). Our algorithm effectively guarantees an overall improvement in mesh quality while using fewer system resources than optimization methods and having
an ease of implementation comparable to Laplacian smoothing. The algorithm rotates every node relative to each angle pair peripheral to the node such that
the size disparity between adjacent angles is minimized. With every pass, the algorithm brings the mesh system closer to rotational equilibrium. To verify this,
we implemented both algorithms (our method and the Laplacian method) using Delaunay triangulation in the C++ programming language and compared their
relative efficacies in smoothing given analytical surfaces. The dimensional quality of the mesh produced by our rotational algorithm was quantifiably better than
that of current Laplacian techniques. Specifically, the method outperformed Laplacian smoothing by eliminating the risk of generating inverted elements and
increasing the homology of element sizes (as demonstrated by a 91% increase in the minimum/maximum area ratio of mesh shapes after 10 passes of both
algorithms). Further, we proved conceptually that our algorithm incurs one thirtieth of the computational cost of optimization-based approaches, a theory that
was confirmed empirically through C++ implementation.
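For reference, the Laplacian baseline the authors compare against moves each free node to the centroid of its neighbors. A minimal Python sketch, with invented adjacency data, ignoring the inverted-element problem the rotational method addresses:

    # Laplacian smoothing sketch: coords maps node -> (x, y),
    # neighbors maps node -> adjacent nodes, fixed nodes stay put.
    def laplacian_smooth(coords, neighbors, fixed, passes=10):
        for _ in range(passes):
            new = {}
            for node, adj in neighbors.items():
                if node in fixed:
                    continue
                new[node] = (sum(coords[a][0] for a in adj) / len(adj),
                             sum(coords[a][1] for a in adj) / len(adj))
            coords.update(new)
        return coords

    coords = {0: (0, 0), 1: (2, 0), 2: (1, 2), 3: (1.8, 1.7)}   # node 3 poorly placed
    print(laplacian_smooth(coords, {3: [0, 1, 2]}, fixed={0, 1, 2}))
    # node 3 relaxes to the centroid (1.0, 0.67) of its neighbors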
2003 - CS061
AN ETHOLOGICALLY BASED ADAPTABLE AI ARCHITECTURE
Scott Sheldon Buchanan
Cortland Christian Academy, Cortland, NY, United States of America
This project examines the subject of artificial learning and adaptation. The question that was studied concerns what type of AI architecture would allow an agent to learn a maze rapidly and to adapt to changes in it dynamically. It was hypothesized that an architecture based on animal behavior would be ideal, due to animals' proficiency in adapting to their environment. A review of the ethological literature was done to identify main concepts that could be applied to artificial intelligence.

As a solution to this issue, a new AI architecture is presented in this project. Based on elements of behavioral theories by Thorndike and Tolman, the architecture incorporates a variety of "behaviors." However, unlike so-called behavior-based AI, the behaviors do not run in parallel in this architecture. Instead, "drive" levels for the different behaviors are constantly evaluated, and the behavior with the highest drive level exclusively controls the agent's actions.

The most complicated behavior is a reinforcement learning mechanism that uses two separate memory banks, a short-term bank and a long-term bank. How much the memories affect the behavior's output is determined by three parameters. In an analysis of the architecture's performance, these parameters were varied and the effects noted. The absolute performance of the architecture and learning mechanism was also analyzed.

From the analysis, it was determined that this architecture is an excellent alternative to other common AI architectures. When the parameter values were set well, the architecture was capable of rapidly learning and efficiently adapting.
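A winner-take-all arbitration loop of the kind described can be sketched as below; the example drives are invented, and the project's actual behaviors and drive formulas are its own.

    # Drive levels are re-evaluated every step; only the single behavior
    # with the highest drive controls the agent (no parallel blending).
    def control_step(agent, behaviors):
        # behaviors: list of (name, drive_fn, act_fn) tuples
        name, _, act = max(behaviors, key=lambda b: b[1](agent))
        act(agent)
        return name

    agent = {"energy": 0.2, "novelty": 0.9}
    behaviors = [
        ("seek_food", lambda a: 1 - a["energy"], lambda a: a.update(energy=1.0)),
        ("explore",   lambda a: a["novelty"],    lambda a: a.update(novelty=0.0)),
    ]
    print(control_step(agent, behaviors))   # "explore": novelty drive (0.9) wins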
Awards won at the 2003 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2003 - CS062
THE TELLTALE HEART OF DARKNESS - DIFFERENTIATING BETWEEN AUTHORS: BEYOND NAIVE BAYESIAN CLASSIFICATION
Brian Andrew Camley
William J. Palmer High School, Colorado Springs, Colorado, United States
The technique of naive Bayesian classification is commonly used in a wide variety of applications, from research in NASA satellites to military applications to medical diagnosis. Bayesian classification is remarkably effective in choosing between two different cases, but is significantly less accurate in multi-case classification. The problem of author attribution is selected as a typical classification problem. A naive Bayesian classifier was programmed in Perl, with sample texts from mid-to-late 19th century authors obtained from Project Gutenberg. In this project, structure-based techniques were evaluated in comparison to the typical naive Bayesian classifier. A new classifier was created based upon the mathematical idea of a dot product. Both the dot product method and the structural methods are inferior to the vocabulary-based naive Bayesian classifier. However, the dot product serves as a remarkably effective prefilter. Using this combination improves the accuracy of author recognition significantly, from 60% accuracy at 200 words of sample text to 90% accuracy in the same case. A theoretical explanation for this improvement, based upon the lack of correlation between the two systems, is presented, and found to be a good predictor of the increased accuracy. This combination system has a significant advantage over traditional systems, not only in increased accuracy but in that it is a linear classifier, unlike other techniques, such as error-correcting output codes (ECOC), which may be non-linear. Additional work is presented showing the effectiveness of this new classification system in other applications.
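The dot-product comparison can be sketched as below, with texts reduced to word-frequency vectors; whether the author normalized the product is not stated in the abstract, so the cosine-style normalization is an assumption, and the sample texts are invented.

    # Dot product of word-frequency vectors as an author-similarity prefilter.
    from collections import Counter
    import math

    def profile(text):
        return Counter(text.lower().split())

    def similarity(a, b):
        dot = sum(cnt * b[w] for w, cnt in a.items())        # missing words count 0
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    sample = profile("the heart of the old house beat in the dark")
    poe    = profile("the beating of his hideous heart in the dark dark night")
    conrad = profile("the river ran into the heart of an immense darkness")
    print(similarity(sample, poe), similarity(sample, conrad))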
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
2003 - CS027
SMS LINK-INTERACTIVE: SECURED DATA TRANSMISSION AND MODIFICATION THROUGH SMS
Roy Vincent Luna Canseco
Philippine Science High School, Diliman, Quezon City, Philippines
SMS Link-Interactive is a system that facilitates the exchange of information between a central computer server and a remote cellular phone user, and also allows remote modification of data in the computer through SMS. A cellular phone with a GSM modem is connected to the computer physically via a data cable.

A program in the computer is notified whenever the cellular phone receives new messages. Processing of these new messages is done by first authenticating the user's identity, then interpreting the user's request.

Information is then retrieved from a database in the computer and the appropriate changes to it are made. A message containing the requested information and a list of changes done is then sent to the user through SMS.

The processing time of the system was determined to be 1.3 seconds on average. Accuracy tests proved that the program is able to ascertain the identity of the user correctly, return the requested data and change the specified information.

As this system has several security features and makes use of a central computer server connected to a cellular network, it can be used for confidential databases, such as patients' records. Doctors can diagnose, change medicine dosages and the like from a remote location by sending a request through SMS to the cellular phone connected to the server.
2003 - CS058
SCHOOL CONTROL SYSTEM IP
Luis Renato Cantu Reyes
CETis 150, Apaseo el Alto, Guanajuato, Mexico
Managing the students' academic information of the DGETI high schools (the government agency that controls and manages the technological high schools of Mexico) is a laborious job, since this information has to be registered manually in pre-established formats, and later be captured in a computer that processes this information to present different types of reports. These reports are addressed to the students, professors, office workers and directives; this is why it is important to implement the "School Control System IP", which allows that information to be managed in a quick and efficient way. The new aspects of "School Control System IP" are: 1) Database manager. It is characterized by a new, different database format; it has high-level security with encrypted information and uses low disk space. 2) Web page generation. The system generates Web pages in HTML format from the students' school information. 3) E-mail reports. All reports generated by the system will be ready to send by e-mail. 4) Import and export of information. It has the new characteristic of importing and exporting information to other systems that apply the same technology. The "School Control System IP" in general contains: 1) Information of the school. Here is all the school information, such as school data, specialties or studies, subjects and groups. 2) Utilities. This part includes tools such as a Backup Wizard, Logos and Users administrator. 3) Students. The students' information is managed, such as the signing-up process, control number assignment, grades, reports, drops, modifications and the Web page generation.
2003 - CS055
SENSOR FUSION FOR SENSOR-ASSISTED TELEOPERATION OF A MOBILE ROBOT
Stephen John Card
Baker School, Baker, FL, USA
The objective of my project was to determine the ability of a mobile robot to present environmental information remotely to a control operator to assist with teleoperation of the platform. I chose this project because teleoperation is often the only effective method of remotely maneuvering a robot in an environment of unknown structure.

To test my ideas, I constructed a simple mobile platform incorporating an acoustical sonar array of three transducers to create a basic model of the robot's environment. The data from the onboard sonar array, a digital compass, and video from an onboard camera are transmitted wirelessly to the operator to provide remote operation. Two simple tests were conducted to test the ability of the sensor system to assist the operator under manual control and to guide the robot autonomously. The sensor data was transmitted back to the operator and logged for comparison to the actual environment. In both tests, the robot performed the required actions without error. The data from the acoustical sonar array closely matched the true environment in both tests and the autonomous routine performed its function.

From this project I gained a greater understanding of sensor fusion for environmental modeling and autonomous navigation. I believe my research has shown that hobbyist-grade components can be used to construct a robot that can navigate its environment under operator control with greater ease than with video reference alone. I hope to continue work in this field focusing more closely on fully autonomous navigation for mobile robots.
2003 - CS035
MINIMUM SPANNING TREES WITH GRAPHICAL USER INTERFACE, PHASE II
Christopher Lee Carter
Roy High School, Roy, UT 84067, USA
The purpose of this project was to create a computer program for companies who wish to find minimum distances or costs between multiple points.

In researching spanning trees, some different algorithms were found that applied. Prim's and Dijkstra's algorithms were selected. While researching GUIs, many items were found to let the user interact with the screen.

A program was created using Java with a graphical user interface to give a user-friendly way to create maps, which would calculate the most efficient path between points, and the best way to get from one point to all others.

The Java program allows the user to choose between the two algorithms to determine the way the points are connected. With Prim's, the computer displays a graph of a minimum spanning tree. When Dijkstra's is chosen, a graph of a spanning tree with a selected starting point is displayed.

The program allows the user to create their own map with the use of the mouse, buttons, and list boxes. The buttons are used to initiate the chosen algorithm and to add lines. The list boxes contain the names of user-created points and are used to create lines as well as select the starting point for Dijkstra's Algorithm. With the menu bar, they can save, recall, and create maps as needed, as well as choose which algorithm to use.

Testing was conducted on all variables in the program to ensure the correct answers are given at all times.
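A minimal Prim's-algorithm sketch over a weighted adjacency map, in Python rather than the project's Java; the GUI, map editing, and Dijkstra option are omitted.

    # Prim's minimum spanning tree via a heap of frontier edges.
    import heapq

    def prim(graph, start):
        visited = {start}
        edges = [(w, start, v) for v, w in graph[start].items()]
        heapq.heapify(edges)
        tree = []
        while edges and len(visited) < len(graph):
            w, u, v = heapq.heappop(edges)       # cheapest edge leaving the tree
            if v in visited:
                continue
            visited.add(v)
            tree.append((u, v, w))
            for nxt, w2 in graph[v].items():
                if nxt not in visited:
                    heapq.heappush(edges, (w2, v, nxt))
        return tree

    g = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
    print(prim(g, "A"))   # [('A', 'B', 2), ('B', 'C', 1)]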
2003 - CS007
PROGRAMMING A SOCIETY OF SUBJECTIVE ARTIFICIAL INTELLIGENCE ENTITIES ADAPTING IN REAL-TIME TO VARIOUS TYPES OF
GOVERNMENTS AND LIFESTYLES TO MODEL GENETIC DRIFT
Christopher Lee DeLeon
Central High School, Saint Joseph, MO, United States
Ever since philosophy and ethical behavior came to contradict one another, unverifiable hypotheses have been proposed advocating revolutionary changes in
government and lifestyle. Such hypotheses defy confirmation/negation through traditional observation because testing for such results would involve unethical
practices conducted on a human sampling over multiple generations. Modern home computers may have adequate processing power to simulate complex
interactions of subjective entities learning and adapting within a culture, and consequently a computer model could theoretically be used to test such
hypotheses while avoiding their inherent ethical complications. If such a computer model could be engineered with sufficient flexibility to simulate a variety of
governmental and cultural differences, the program would provide scientists with a means by which to experiment and gather theoretical data in the field of
eugenics.
Awards won at the 2003 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2003 - CS053
CAN A SOFTWARE PROGRAM BE DEVELOPED THAT CAN ACCURATELY CORRELATE MOUSE POINTER MOVEMENT TO THE HUMAN EYE?
Darash Gayesh Desai
Celebration School, Celebration, Florida, USA
Today's increase in computer-related work has led to a wide variety of software programming to ease complications and increase productivity. Programs such as Microsoft Word and Microsoft PowerPoint are just a few of the commercial software products that exemplify this. With every innovative software program, a new level of ease is reached in terms of work and productivity. However, wouldn't work be easier if interaction with the computer was more natural? Speech recognition software is already available, but what is the next step?

I chose my topic with this question in mind, and came up with a new, more natural system to interact with one of the most basic components of a computer system: the mouse pointer. This new system uses eye movements to interact with the pointer, and was implemented by designing and writing computer software that could calculate where the user was looking and move the mouse there.

My hypothesis was that the program could be written such that it yielded exceedingly accurate results. This hypothesis was proven correct, as the final averages were 96.25% and 100.00% in terms of technical and practical scores respectively.

The purpose of this project was to prove that moving the mouse with the human eyes was theoretically possible. Through proceeding with the experiment, this was proven true; yet this project can, and will, be taken farther, so that the current software program can be given real-time data collected through an input device, thus eliminating the need for direct user input.
2003 - CS013
NICE GUYS DOMINATE - A COMPUTER SIMULATION STUDYING THE EFFECT OF DOMINANT-RECESSIVE GENETICS ON A POPULATION'S
BEHAVIORAL EVOLUTION
Michael Jason Diedrich
Century High School, Rochester, MN, United States of America
This project was a computer simulation studying the effect of dominant-recessive genetics on a population's behavioral evolution, using the three-player,
iterated Prisoner's Dilemma. The first part of the project was to evaluate what effect the addition of dominance had on the results. The hypothesis for the first
part was that there would be no statistically significant difference between a simulation without dominance and a simulation with dominance. To test this, fifty
trials each were run of the previous year's experiment, in which there was no dominance, and a "free-for-all" environment where both cooperation and defection
could exist as dominant and recessive genes. The hypothesis was supported; a Student's t-test showed no statistically significant difference between the two
environments. The second part of the project was to compare the results when cooperation and defection were separately made the dominant gene. The
hypothesis for the second part was that an environment in which cooperation is the dominant gene would evolve higher scores than an environment in which
defection is dominant. Fifty trials each were run of a cooperate-dominant environment and a defect-dominant environment. The hypothesis for the second part
was supported. The scores for the cooperate-dominant environment were higher than the scores for the defect-dominant environment. Also, it was noticed that the range of scores for the cooperate-dominant environment was lower than the range of scores for both the defect-dominant environment and the "free-for-all"
environment, suggesting that the cooperate-dominant environment was the most consistent.
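One round of the underlying game can be sketched as below, using the conventional two-player Prisoner's Dilemma payoffs; the project used a three-player, genetics-driven variant whose exact payoff table is not given in the abstract.

    # Iterated Prisoner's Dilemma sketch with standard payoffs (C=cooperate, D=defect).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def play(strategy_a, strategy_b, rounds=100):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strategy_a(hist_b), strategy_b(hist_a)   # each sees the opponent's history
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a); hist_b.append(b)
        return score_a, score_b

    tit_for_tat = lambda opp: opp[-1] if opp else "C"
    always_defect = lambda opp: "D"
    print(play(tit_for_tat, always_defect))   # (99, 104) over 100 rounds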
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
2003 - CS037
AGNOS ADJUSTABLE IDENTITY MASKING AND FILTERING SYSTEM
Balint Karoly Erdelyi
Premontratensian High School, Szombathely, Hungary
Agnos takes a totally different philosophy of "security" and "defence", and it can provide shelter from attackers. The field of computer security is evolving at a phenomenal rate. New services, new systems and new networking architectures continuously add new dimensions to the field and completely change previously held assumptions. The Agnos System is more than a simple firewall. It is an OpenBSD kernel module, which includes a non-standard IP protocol version 4 implementation, a packet filter, an imitation system and an IDS. Users can set up and configure the software with a user-friendly GUI.

With Agnos, you can:

- configure your TCP/UDP/ICMP packets' headers, among others: sequence number, fragment bits, window size, TCP options, etc.
- select an operating system that you want to imitate, and the system will change the required characteristic features automatically.
- filter high- and low-level protocol packets (interface, target and source IP/HOST, type of packet (ICMP/UDP/TCP), port, low-level protocol packets such as HTTP, FTP, etc.) and of course add new rules manually or automatically.
- protect the server against various kinds of port scans, among others NULL, SYN and FIN scans.
- easily monitor security offensives; after an attack, Agnos is able to deny the attacker's host/IP.
- configure the system and upgrade it automatically (if needed) from a central database server.
2003 - CS020
A NEW METHOD FOR 3-D OBJECT MODELING & OPTICAL DATA RECOGNITION
Robert Earl Eunice
Houston County High School, Warner Robins, Georgia, USA
The goal of this project was to develop a new, low-cost method for scanning three dimensional (3-D) objects and develop software that allows the computer to
recognize the scanned object. The researcher used relatively common electronics to construct a 3-D scanning device. These items consisted of an ordinary
pen-style laser pointer, a computer web camera, a stepper motor, motor controller, and ordinary computer. Software was then developed to interface with the
hardware and capture many images containing the projection of a laser line on an object. Several thousand images were captured and analyzed. 3-D
coordinates were then calculated using triangulation algorithms developed by the researcher. The coordinates were then displayed on screen in the form of a 3-D model and the scanning process was completed. Each scan produces nearly 5 GB of raw data, and over 1.2 million 3-D coordinates. The scanned points are
compared to a database of objects and analyzed for matching, using artificial intelligence algorithms and software developed by the researcher. Applications of
this new 3-D scanner are far ranging. Home users may use this device to scan objects such as a child’s first piece of pottery, or make 3-D visualizations of
objects to sell online. Business applications may include faxing models of 3-D objects to clients anywhere in the world, or generating AutoCAD drawings from 3-D prototypes. Military and government applications may use an enlarged version of this technology to scan 3-D models of important historical icons in the event
that reconstruction of those structures is necessary.
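The triangulation geometry behind such a laser scanner can be sketched in 2-D: camera at the origin, laser source a baseline apart, both aimed at the object. In practice the camera angle would come from the laser line's pixel position; the values below are purely illustrative, not the researcher's algorithm.

    # 2-D laser triangulation: camera ray x = z*tan(cam_angle),
    # laser ray x = baseline - z*tan(laser_angle); intersect for depth z.
    import math

    def triangulate(baseline, cam_angle, laser_angle):
        # angles measured from the perpendicular to the baseline, in radians
        z = baseline / (math.tan(cam_angle) + math.tan(laser_angle))
        x = z * math.tan(cam_angle)
        return x, z

    print(triangulate(0.2, math.radians(10), math.radians(30)))   # ~(0.047, 0.265)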
Awards won at the 2003 ISEF
Award of $300 - Association for Computing Machinery
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
All expense-paid trip to attend the U.S. Space Camp in Huntsville, Alabama and a certificate - National Aeronautics and Space Administration
Honorable Mention Award - Optical Society of America
Grand Award of $1,000 - Patent and Trademark Office / U.S.Department of Commerce / Patent and Trademark Office Society
Honorable Mention - International Society for Optical Engineering
Award of $3000 in savings bonds, a Certificate of Achievement and a gold medallion. - U.S. Army
Second Award of $3,000 - U.S. Coast Guard Research and Development
2003 - CS018
COMPUTER SIMULATED ASEXUAL EVOLUTION USING A MUTATION DRIVEN ALGORITHM
Benjamin Alan Frison
Mills University Studies High School, Little Rock, Arkansas, United States
The goal of this project was to develop a computer-simulated environment in which an asexually reproducing herbivore population evolved to survive with a growing carnivore population. Mutation-driven genetic algorithms were used to randomly change elements of array-based DNA. The DNA elements coded for organism structures, producing organism functions in the simulated environment. It was proposed that a mutation probability of 1 out of 100 herbivore reproductions (0.01) would be sufficient for the herbivores to evolve traits that allowed them to survive with the carnivores.

The environment was a 1000x1000x2 matrix of pointers to class "matter" objects, which in turn dynamically pointed to derived classes "plant", "herbivore", and "carnivore". In initialization, the bottom layer of the matrix pointed to approximately ½ plants and ½ base matter. Organisms were added to the environment and to a linked list. Pointers to nodes on the linked list executed organism live functions, which enabled them to detect other creatures, run to and from other creatures, find food and reproduce. The linked list was to be traversed 100+ times before the main function exited and evolution stopped.

Problems delayed the project, but almost all runtime and compiler errors were corrected. The linked list could, at most, be traversed 50 times. Enough data was collected to support the hypothesis. Notable evolutionary changes included herbivores evolving into omnivores, becoming bipedal, developing greater muscle ratios, and developing cup eyes with variable lenses. With these evolutionary changes, the herbivore population would have survived.
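The mutation step at the core of such a simulation can be sketched as below (in Python rather than the project's C++); the DNA encoding and gene value range are invented placeholders, with only the 0.01 mutation probability taken from the hypothesis above.

    # Mutation-driven reproduction sketch: offspring copy the parent's
    # array-based DNA; 1 in 100 reproductions mutates one random gene.
    import random

    def reproduce(dna, mutation_rate=0.01):
        child = list(dna)
        if random.random() < mutation_rate:
            i = random.randrange(len(child))
            child[i] = random.randrange(256)   # illustrative gene value range
        return child

    parent = [12, 200, 7, 33]
    print(reproduce(parent))   # almost always an exact copy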
2003 - CS039
SIS.DOCTOR
Daniel Fuhr
Fundação Escola Tec Liberato Salzano Vieira da Cunha, Novo Hamburgo, RS, Brazil
Being well-informed, more than ever, has become a basic prerequisite to reach success, no matter the field. Having the largest amount of information about a certain subject guarantees more accurate decision making, more coherent choices, better possibilities to reach the expected goal.

If doing the right thing may seem important, for instance, in a school activity or in traffic, making the right decision when the subject is Medicine is more than important, it is essential. The SIS.DOCTOR project tackles exactly this issue: making available information which is vital for the diagnosis of a patient in a dynamic way through a computer network. Having access to the complete history of the patient's previous laboratory tests, among other data, will certainly allow the doctor to make a better diagnosis.

Being able to check the tests soon after they have been taken encompasses the possibility of an immediate hospitalization or the requirement of further tests, for example. Thus there are advantages for the patient, who is treated faster and more effectively; and for the doctor, who has access to a powerful tool for a better and more qualified diagnosis, besides being able to centralize all medical information about the patient.

The SIS.DOCTOR project aims at, more than presenting itself as a modern database tool, being able to qualify the work developed by medical professionals, helping them in the crucial task of making decisions which may save lives.
2003 - CS006
BRINGING CHAOS TO ORDER: CHAOTIC PARTICLE SIMULATION
Andrew Ryan Geiger
Pine View High School, Osprey, FL 34229, USA
Chaos is very difficult to quantify, since its definition is still qualitative and varies between points of view and situations (not to be confused with entropy, which actually has a unit of measure in chemistry). To demonstrate chaos, I built a computer model that simulates particle physics. In the simulations, numerical data is used to show a definite disparity between an ideal system and similar systems which have slightly different starting conditions, typically a small shift in starting position, and is not used to literally quantify the chaos. Instead, Big-O notation is used to show which systems, given sufficiently large iterations of the simulated physics model, would appear chaotic sooner than another. Comparisons of the effects of precision, starting positions, velocities, universal constants, mass, and different physics models were analyzed, yielding a plethora of conclusions and patterns.

Systems that are very dependent on starting conditions always became chaotic when the starting conditions were changed. The inverse-square law model would appear random when greatly deviated from the ideal set, but did so before the constant-force model. Adding more deviation greatly accelerated the deviations of the particles, and simply changing the precision with which the computer iterated greatly changed the accuracy of the results; the lower the precision, the less time before the system became chaotic.
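The sensitivity described above can be demonstrated with the logistic map, the simplest standard chaotic system, standing in here for the particle models: two runs differing by 1e-10 in the starting position diverge within a few dozen iterations.

    # Two nearly identical starting conditions diverge under a chaotic map.
    def trajectory(x, steps, r=4.0):
        out = []
        for _ in range(steps):
            x = r * x * (1 - x)        # logistic map in its chaotic regime
            out.append(x)
        return out

    a = trajectory(0.4, 50)
    b = trajectory(0.4 + 1e-10, 50)
    for step in (10, 30, 50):
        print(step, abs(a[step - 1] - b[step - 1]))   # gap grows toward O(1)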
2003 - CS051
BRAIN-COMPUTER INTERFACE FOR THE MUSCULARLY DISABLED
Elena Leah Glassman
Central Bucks West High School, Doylestown, PA, USA
Muscular disability may deny computer access. Brain-computer interfaces (BCIs) using non-invasive electroencephalographs (EEGs) may allow computer access by substituting for keyboard and/or mouse.

BCIs should decode as much information from the EEG as possible. Different commands (e.g., imagined movement or colors) create patterns of microvolt signals across the scalp.

My system uses wavelet analysis, feature selection, and a Support Vector Machine classifier to achieve satisfactory accuracy for BCIs, as tested on public-domain EEG datasets.

No single theory exists for choosing a wavelet. A wavelet's correspondence to and ability to compress EEGs may indicate its utility in this application. I developed a method to create wavelets based on a sum of neuron action potentials designed for maximum correspondence with EEGs. Standard wavelets were chosen for further evaluation based on EEG compression results.

I compared classification accuracy between the discrete (DWT) and the continuous (CWT) wavelet transforms. Unlike the relatively inflexible DWT, the CWT can encode, at precise locations in frequency and time, the signals' discriminatory features.

In the DWT experiments, my EEG-specific wavelet achieved either roughly equivalent accuracy to the db4 wavelet but with far fewer coefficients or higher accuracy with more coefficients. Mine may therefore be the best of the candidate wavelets, and due to its design, possibly best for higher-level mental tasks.

I am currently developing a system in which continuous BCI data is simulated (through concatenation of available EEG segments) and processed using a sliding window and a combination of binary classifiers, each optimized for a specific decision.
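A DWT of the kind compared here can be sketched with the Haar wavelet, chosen only for simplicity (the project evaluated db4 and a custom EEG-derived wavelet): each level splits the signal into coarse averages and detail differences, and the detail coefficients become classifier features.

    # Unnormalized Haar DWT sketch: pairwise averages (approximation)
    # and differences (detail); input length should be a power of two.
    def haar_step(signal):
        approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
        detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
        return approx, detail

    def wavedec(signal, levels):
        coeffs = []
        for _ in range(levels):
            signal, detail = haar_step(signal)
            coeffs.append(detail)
        return signal, coeffs          # final approximation + per-level details

    print(wavedec([4, 2, 8, 6, 1, 3, 5, 7], levels=2))
    # ([5.0, 4.0], [[1.0, 1.0, -1.0, -1.0], [-2.0, -2.0]])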
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
Third Award of $350 - IEEE Computer Society
A scholarship of $50,000, and a high-performance computer. - Intel Foundation Young Scientist Award
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Computer Science - Presented by Intel Foundation
Special Achievement Award of $1,000 plus certificate - Caltech JPL
UTC Stock - United Technologies Corporation
2003 - CS041
DEVELOPMENT OF A BAYESIAN STATISTICAL SPAM FILTER
Jerry Ji Guo
Riverside High School, Greer, South Carolina, USA
E-mail is quickly becoming the fastest and most economical form of communication available. However, without appropriate counter-measures, spam messages could eventually undermine the usability of e-mail. The engineering goal of this project was to develop a robust, adaptive, and highly accurate spam filter.

Two corpora containing tokens and corresponding frequencies were compiled, one for spam and one for legitimate e-mail, through manual, on-error, and AI training of the filter. Sender address and domain were also extracted for whitelist processing. Individual token probabilities were calculated by comparing frequency ratios between the corpora. The overall spam probability was calculated by combining the most interesting token probabilities in adapted Bayesian algorithms. The program was implemented in Visual Basic and trials were conducted in order to collect data.

With 1,000 trained e-mails divided evenly between the two corpora, the accuracy rate for filtering spam was 97.3% based on a test sample size of 1,000 e-mails. The false positive and negative rates were 2.1% and 0.6%, respectively. Character n-grams did not perform as well as the word tokens. Word n-grams needed larger corpora.

Performance was optimized under certain conditions, including stratified training with no lemmatiser or stop-list, calculations using the advanced Bayesian algorithm on word tokens and frequency, and whitelist processing. A typical user can implement such a spam filter with minimal maintenance and achieve very similar results.
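The token-probability combination step can be sketched as below, in the style of Graham-type Bayesian filtering (the project's exact constants, clipping, and "interesting token" cutoff are assumptions, and the corpus counts here are toy values).

    # Per-token spam probabilities from corpus frequencies, combined
    # over the tokens farthest from 0.5 (the "most interesting" ones).
    def token_prob(tok, spam_counts, ham_counts, n_spam, n_ham):
        s = spam_counts.get(tok, 0) / max(n_spam, 1)
        h = ham_counts.get(tok, 0) / max(n_ham, 1)
        if s + h == 0:
            return 0.4                       # mildly innocent default for unseen tokens
        return min(max(s / (s + h), 0.01), 0.99)

    def spam_probability(tokens, spam_counts, ham_counts, n_spam, n_ham, top=15):
        probs = [token_prob(t, spam_counts, ham_counts, n_spam, n_ham)
                 for t in set(tokens)]
        probs.sort(key=lambda p: abs(p - 0.5), reverse=True)
        p = q = 1.0
        for pr in probs[:top]:
            p *= pr
            q *= 1 - pr
        return p / (p + q)                   # combined Bayesian estimate

    spam_counts, ham_counts = {"viagra": 40, "free": 30}, {"meeting": 50, "free": 5}
    print(spam_probability("free viagra now".split(), spam_counts, ham_counts, 500, 500))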
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2003 - CS069
LIQUID COOLING AND YOUR PC
John R. Hartigan
Belleview Middle School, Belleview, Florida, USA
The purpose of the experiment was to see what type of liquid cooled a Central Processing Unit (CPU) the best. The hypothesis was that the methyl alcohol would cool the CPU the best, because it combined alcohol's good cooling properties and water's thinness, making it easier to pump. After all of the materials were gathered, the water-cooling system was set up and filled with a liquid. The system was then bled of air, and the computer turned on. The temperatures were recorded at five-minute intervals, and after twenty minutes, the computer was shut off, and the whole process was repeated with a different liquid. In general, the results of the experiment were not expected. Originally, the water-cooling system was supposed to keep the CPU temperature down in the low 30s degrees Celsius. However, in the experiment, once the computer was powered on, the temperature of each liquid rose to 40 degrees Celsius and higher. This forced the testing to last twenty minutes instead of three hours, because much longer and the CPU would overheat. Only the methyl alcohol stayed at 41 degrees. The isopropyl alcohol rose to 52 degrees, where the computer was then shut off, and the distilled water rose steadily to 41 degrees, where it reached the time limit and the computer was shut down. The distilled water would have followed the isopropyl alcohol's path and soon overheated the CPU. Overall, the hypothesis was proven correct because the methyl alcohol did cool the best, but it did not cool as well as it should have.
2003 - CS033
MODELING THE ECOLOGICAL SUCCESS OF DECISION RULES USING VARIANTS OF THE ITERATED PRISONER'S DILEMMA
Katherine Harvard
Great Neck South High School, Great Neck, NY, USA
The Prisoner's Dilemma is a simple program that presents the players with two choices: defect or cooperate. Based on their decisions, the players receive a certain number of points. When the Prisoner's Dilemma is iterated, or played repeatedly, strategies emerge among players that dictate their choice of moves. A pool of multiple strategies can be constructed. Certain strategies can be successful or unsuccessful against different types of opponents.

A computer model to simulate the ecological success of strategies, or decision rules, was created in three stages using the Java programming language. The first stage involved the creation of a simple tournament program to run strategies against one another. The second stage involved the refinement of a specific strategy called Tit For Tat, which exhibited several negative characteristics. The final program treated strategies as individuals and displayed what proportion of the population each strategy made up based on an algorithm to determine this proportion.

The initial step successfully ran a tournament between the strategies. The second step was also successful because the variant of Tit For Tat performed better in a secondary tournament than did the original. The main and final step of the project, the ecological model, proved worthy as well. It was able to calculate the proportions of strategies and then display the strategies in the population in a graphical output. The model was successful in mimicking an ecological environment under a number of variations, allowing for a clearer idea of how and why cooperation emerges among inherently selfish individuals.
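The proportion update at the heart of such an ecological model can be sketched as below, Axelrod-style; the project's exact update rule is not given in the abstract, so this replicator-like form is an assumption (in Python rather than the project's Java).

    # Each generation, a strategy's population share grows in proportion
    # to its share weighted by its average score against the current mix.
    def ecological_step(proportions, avg_score):
        fitness = {s: share * avg_score[s] for s, share in proportions.items()}
        total = sum(fitness.values())
        return {s: f / total for s, f in fitness.items()}

    pop = {"TitForTat": 0.5, "AlwaysDefect": 0.5}
    print(ecological_step(pop, {"TitForTat": 2.25, "AlwaysDefect": 1.75}))
    # TitForTat's share grows to 0.5625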
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2003 - CS026
UNBREAKABLE?
Jake Daniel Hertenstein
De Soto High School, De Soto, Missouri, USA
This second-year engineering project is the result of developing a secure protocol to port the previously developed fractal encryption system to the Internet. The original concept could only encrypt data in a local, symmetric-key system. The development of a provably unbreakable encrypted communication protocol between a client and server prevents many conventional forms of attack.

Packet transmission is secured by using a fractal as a symmetric key. Under the principles of the Vernam encryption and Bayes' Theorem, packets encrypted under this algorithm are unbreakable. The fractals produce a large amount of random data to use as a One-Time Pad. The developed protocol ensures that at any given point during the packet transmission, the secured payload is not vulnerable to cryptanalysis.

Unfortunately for Gilbert Vernam, the size of the key required to encrypt a message was greater than or equal to the size of the message. This prevented the use of One-Time Pads commercially. However, fractals require only a few numbers to generate a large amount of data. The developed protocol can be used efficiently on the Internet with secured servers which handle key distribution and agreement.

Bayes' Theorem has been combined with the statistical randomness of fractal images to form the first practical implementation of One-Time Pads on the Internet. The theorem proves that the encryption is unbreakable, provided that the key is only used once and is statistically random. The keys can be proven to be random by statistical comparison of binary 1's to 0's.
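The seed-expands-to-pad idea can be sketched as below: a few numbers deterministically generate a long keystream that is XORed with the message, Vernam-style. A seeded PRNG stands in for the fractal generator here, and unlike a true one-time pad this keystream is not genuinely random; the sketch only illustrates the mechanics.

    # Vernam-style XOR with a keystream expanded from a small seed.
    import random

    def keystream(seed, n):
        rng = random.Random(seed)             # stand-in for fractal-derived data
        return bytes(rng.randrange(256) for _ in range(n))

    def vernam(data: bytes, seed) -> bytes:   # XOR: encryption == decryption
        return bytes(a ^ b for a, b in zip(data, keystream(seed, len(data))))

    msg = b"attack at dawn"
    ct = vernam(msg, seed=1234567)            # seed stands in for fractal parameters
    assert vernam(ct, seed=1234567) == msg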
2003 - CS312
A LIQUID-BASED THERMOELECTRIC APPLICATION FOR PROCESSOR ARCHITECTURE SCALABILITY
Larry Zhixing Hu, Matthew P. Meduna
UMS-Wright Preparatory School, Mobile, Alabama, USA
Silicon-based processor cores have sparked the growth of microprocessors since the introduction of the 4004 processor in the early 1970's. Moore's Law states that the number of transistors in a microprocessor core doubles every twenty-four months. Moore's Law has generally been accurate in the past; however, Intel foresees Moore's Law becoming invalid after the end of this decade. Scalability will become a paramount issue for semiconductors as they transition to new manufacturing technologies. Should scalability be lost in the transitional stage to new methods of creating microprocessors, an immediate solution such as enhanced cooling with thermoelectric and liquid-based properties will help alleviate the problem.

In order to implement the cooling system, proper preparation must be done. The pinholes in the socket will be filled with dielectric grease. Coating and foam will be used to protect crucial parts of the socket from the effects of condensation. The processor will be "sandwiched" between the socket and the waterblock and thermoelectric unit combination. Threaded tubing will be used to circulate water through the system, consisting of a pump and reservoir combo, the waterblock, and the radiator.

After rigorous testing, the liquid-based thermoelectric cooling system yielded a 50% increase in the operating clock speed of the processor with complete stability. This was done through the manipulation of the system bus. These researchers conclude that should new technologies not emerge in time, and scalability become stagnant on current process technology, enhanced cooling systems will serve as an option to immediately help counter the dilemma.
Awards won at the 2003 ISEF
Third Award of $1,000 - Team Projects - Presented by Science News
2003 - CS066
A STUDY OF THE USE OF AUTOSTEREOGRAMS AS A FORM OF STEGANOGRAPHY
Joshua Larson Hurd
Hellgate High School, Missoula, MT, USA
The autostereogram, a two-dimensional method of producing a three-dimensional effect, was studied to evaluate its use as a form of steganography.
Steganography is hiding the existence of one message within another message. In this study, different methods of producing autostereograms were analyzed
to see if the original third-dimension depth information, which contained the message, could be accurately reconstructed. A method was found that could be
accurately reconstructed, but was not very concealable and steganographic, as it would just appear as a randomly colored, pixelated image. A method was then
created based on the previous method that constructed the autostereogram in reverse order, from a template image. Because the autostereogram could be
constructed from any existing image, the hidden message within the autostereogram would not appear as obvious as in the previous method; however, the
message within the autostereogram was still not able to be totally hidden. The method constructed the autostereograms by creating many corresponding pixels
of the same color, in which the data would be encoded by varying the distance between those corresponding pixels. The data would then be recovered by
calculating the distance between these pixels. Limitations were placed on the autostereogram to ensure that having more than two pixels of the same color
would generate no errors. This method, using a standard 256-color GIF image, would achieve a 4:9 compression ratio on average. Autostereograms can
indeed be used as a form of steganography; however, the current methods of construction are not totally concealable.
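A toy 1-D analogue of the encoding described above, purely for illustration: each byte is stored as the gap between two matching markers in a row of cells. Real autostereograms apply this per scanline in 2-D, constrained by the template image; this sketch only shows the offset idea, not the study's actual construction.

    # Encode bytes as distances between matching pixel pairs; decode by
    # measuring the distance between the two occurrences of each marker.
    def encode(data, row_len=4096):
        row = [0] * row_len
        pos = 0
        for i, byte in enumerate(data, start=1):
            gap = byte + 1                 # a gap of byte+1 cells encodes the byte
            row[pos] = row[pos + gap] = i  # each pair shares a unique marker value
            pos += gap + 1
        return row

    def decode(row):
        first, out = {}, []
        for idx, mark in enumerate(row):
            if mark == 0:
                continue
            if mark in first:
                out.append(idx - first[mark] - 1)
            else:
                first[mark] = idx
        return bytes(out)

    assert decode(encode(b"hi")) == b"hi"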
2003 - CS005
TERNARY VS. BINARY
Natasha Rustom Irani
Lake Highland Preparatory School, Orlando, FL, USA
Computers are used for two general purposes: information retrieval and mathematical calculations. Modern computers have been built to run in binary (base 2). This experiment is aimed at giving designers another tool, ternary computing (base 3), that can be used for solving problems. What gives ternary systems their unique place is the fact that they can represent much bigger numbers with fewer digits.

For mathematical calculations the numbers have to be accurate; however, the difficulty lies in the ability to represent numbers analog in nature by quantifying them in discrete units. In order to improve the quality of the calculation, more bits are used so that "quantization error" diminishes; however, the computer system becomes more expensive. Also, higher-order A/D and D/A chips occupy more space because they have more pins, and the quality required of a system may cause a designer to add more bits than the natural size of the microprocessor. Using "trits", on the other hand, allows for more free space, or, if desired, more power in the same amount of space. Furthermore, since size and cost are critical factors in technology today, this project's applications could potentially affect every facet of the modern world, from industries such as NASA to everyday people.

Through creating both a hardware and software portion of a ternary simulation, it can clearly be seen that ternary computing is more efficient than binary. Since ternary computing has not been successfully simulated in hardware, this could easily be the next great breakthrough in computing technology.
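The digit-count claim is easy to check numerically: representing the same value takes noticeably fewer ternary digits ("trits") than binary bits, since each trit carries log2(3) ≈ 1.58 bits of information.

    # Compare digit counts for the same number in base 2 and base 3.
    def digits(n, base):
        count = 0
        while n:
            n //= base
            count += 1
        return count or 1

    for n in (255, 10_000, 2**20):
        print(n, "bits:", digits(n, 2), "trits:", digits(n, 3))
    # 255 needs 8 bits but 6 trits; 2**20 needs 21 bits but 13 trits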
Awards won at the 2003 ISEF
Full-tuition scholarships - Drexel University
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
Award of $500 - Schlumberger Excellence in Educational Development
First Awards of $3,000 - U.S. Air Force
2003 - CS031
SKELETAL CHARACTER ANIMATION IN A VIRTUAL REALITY LEARNING ENVIRONMENT
Arya Mohammad Iranmanesh
Roanoke Valley Governor's School, Roanoke, VA, USA
Ignited by the onset of the personal computing era, technological advancements have ushered in an age dominated by the bit. It has recently become possible to educate students efficiently using a system that is, at its core, computer-based. Total immersion is critical to the success of such an application; hence there arises a need for accurate mathematical simulation of physical interactions, collision detection, character movement, lighting, and the general detail of the environment.

The presented application incorporates the most advanced technology available on the consumer market to create a highly realistic learning environment with the flexibility to convey information and conduct demonstrations on virtually any subject matter. At its base lies a highly modular foundation built with C++, DirectX, and the Windows API. Furthermore, using x86 assembly and C, REASCRIPT (a computer programming language) and a corresponding virtual machine were written to support simple scripting of engine actions. Complex mathematical models form the core of the application; most notably, an advanced system of "bones," with rotations encoded as four-dimensional quaternions, performs real-time mesh deformation, resulting in highly realistic animation. The system is implemented through vertex shaders, essentially short assembly routines programmed specifically for the graphics processing unit (GPU). Also implemented in the vertex shaders are matrix-based view and perspective transformations, which deform the environment according to the calculated perspective of the user.

The project lays the framework for highly interactive educational applications. Through such an application, a professor may with relative ease script a highly visual lesson involving accurate computer simulations of desired topics. Thus arises an era of new possibility involving the integration of microprocessor technology into education.
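To make the quaternion idea concrete, here is a minimal editorial sketch (Python, not the author's C++/DirectX engine) of rotating a 3-D point with a unit quaternion, the core operation behind quaternion-encoded bones:

    import math

    def quat_from_axis_angle(axis, angle):
        """Unit quaternion (w, x, y, z) for a rotation of `angle` radians about `axis`."""
        ax, ay, az = axis
        norm = math.sqrt(ax*ax + ay*ay + az*az)
        s = math.sin(angle / 2) / norm
        return (math.cos(angle / 2), ax*s, ay*s, az*s)

    def quat_mul(a, b):
        """Hamilton product of two quaternions."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def rotate(point, q):
        """Rotate a 3-D point by unit quaternion q via q * p * q^-1."""
        w, x, y, z = q
        p = (0.0, *point)
        qinv = (w, -x, -y, -z)   # the inverse of a unit quaternion is its conjugate
        return quat_mul(quat_mul(q, p), qinv)[1:]

    q = quat_from_axis_angle((0, 0, 1), math.pi / 2)   # 90 degrees about the z-axis
    print(rotate((1, 0, 0), q))                        # ~ (0, 1, 0)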
Awards won at the 2003 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
2003 - CS025
TENSORFACES: A MULTILINEAR MODEL FOR COMPUTERIZED FACE RECOGNITION AND IMAGE PROCESSING
Tonislav Ivanov Ivanov
Stuyvesant High School, New York, NY, USA
Humans have a remarkable ability to recognize faces of known people in unfamiliar situations. However, constructing a robust computer model for recognizing
faces is a challenging problem. This is because images of the same individual acquired under different lighting conditions and from different angles appear
dissimilar. Facial images are the composite consequence of several factors such as face geometry, viewpoint, and illumination. By applying multilinear algebra,
the algebra of high-order tensors, one can separate the different factors that make up a facial image, thus creating a face representation and system invariant
to different imaging conditions. This model, called TensorFaces, results in improved recognition rates over the standard linear “eigenfaces” model. This makes
it a promising tool for a computerized face recognition system. Moreover, the TensorFaces model enables one to synthesize new images of a person under
different imaging conditions. It also lets one reduce specific image effects, such as cast shadows. TensorFaces can be applied to photography enhancement,
human-computer interaction, intelligent environments, secure access, surveillance for national security, and other applications.
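As a hedged editorial sketch of the multilinear idea (not the TensorFaces implementation itself): a set of face images indexed by person, illumination, and pixel forms a third-order tensor, and an SVD of each mode's unfolding separates those factors. The tensor sizes below are hypothetical.

    import numpy as np

    # Hypothetical toy tensor: 5 people x 4 illuminations x 100 pixels.
    rng = np.random.default_rng(0)
    T = rng.standard_normal((5, 4, 100))

    def unfold(tensor, mode):
        """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    # One SVD per mode yields the factor subspaces (people, illumination, pixels).
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(3)]
    for name, U in zip(("people", "illumination", "pixels"), factors):
        print(name, U.shape)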
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
First Award of $1,000 - Eastman Kodak Company
Awards of $5,000 - Intel Foundation Achievement Awards
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2003 - CS038
A STUDY OF ELECTRONIC CONTROL METHODOLOGY
Taylor Thomas Johnson
Brenham High School, Brenham, Texas, USA
This project, “A Study of Electronic Control Methodology”, is a comparison of electronic control methods—processes that control circuitry, computers, and general electronics. The purpose of this project is to determine, in numerical form, the differences and advantages of various electronic control methods—analog control, Boolean logic, and fuzzy logic with artificial neural networks (neuro-fuzzy systems)—and to apply the methods to several different real-world problems.

In all of the tests performed, the neuro-fuzzy systems were the most efficient (a measure of a system's accuracy compared to the time and amount of data required to produce an output): twice as efficient as Boolean logic and nearly eight times as efficient as analog control. They were also the most accurate in both the optical character recognition (OCR) tests and the process control experiments: 96.528% accurate in the OCR tests, compared to 4.861% for analog control and 43.056% for Boolean logic. The analog control required the greatest amount of data to produce an output in every test (infinite in wave recreation and 22.930% above the average in logic comparison).

From these and other experimental data, the conclusion was drawn that neuro-fuzzy systems are both the most accurate and the most efficient of these control systems. This experiment can benefit countless industries; it demonstrated that neuro-fuzzy systems can more accurately and efficiently control situations where control processes are used, such as air conditioning, anti-lock brake systems, chemical reactions, speed controllers, et cetera.
2003 - CS029
BRUTE-FORCE ANALYSIS OF PATTERNS FOR A TWO-DIMENSIONAL CONWAYNIAN GAME OF LIFE
Travis C. Johnson
Sunnyside Senior High School, Sunnyside, WA
Objective: The purpose of this experiment was to gather the anomalies of an n-bit pattern set in the Game of Life. Data gathered were the number of generations before stability was reached, starting and ending tallies, and the stability period length.

Methods: For this experiment, code to test the patterns was written, tested, and run. The code works by loading in a pattern, letting it run for one generation, looking for repetition, stopping if a loop is detected, and continuing until either 250 generations have elapsed or stability has occurred. Once a pattern is done, the next pattern is loaded and these steps are repeated.

Results: My project found that most patterns stabilize quickly, don't change after becoming stable, have a start tally of six, and an end tally of zero. The results show that there is little correlation between any of the dependent variables.

Conclusion: Three of my hypotheses were false, while one was true. The correlation data were particularly interesting: they showed little correlation, yet the patterns arose from simple rules. This implies that complexity can arise from very simple rules.
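A minimal editorial sketch of the loop described under Methods (Python; not the author's code): advance one generation, record each grid state, and stop on repetition or after 250 generations.

    from collections import Counter

    def step(live):
        """One Game of Life generation; `live` is a set of (x, y) cells."""
        counts = Counter((x+dx, y+dy) for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    def run_until_stable(pattern, max_gens=250):
        """Return (generations elapsed, period length) or (max_gens, None)."""
        seen = {frozenset(pattern): 0}
        live = set(pattern)
        for gen in range(1, max_gens + 1):
            live = step(live)
            key = frozenset(live)
            if key in seen:                      # loop detected
                return gen, gen - seen[key]      # stability point and period
            seen[key] = gen
        return max_gens, None

    print(run_until_stable({(0, 0), (0, 1), (0, 2)}))   # blinker: period 2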
2003 - CS024
DATA MINING NETWORK TRAFFIC WITH NEURAL NETWORKS
Dmitry Kashlev
Bronx High School of Science, Bronx, NY, United States
Accurate detection of intruder activity for protection of sensitive information is crucial in many aspects of computer transactions and security. Neural networks can be applied to detect intrusive activity. The intrusion detection systems used today are not perfect: they generate many false alarms and miss intrusions. It is important to find a way to reduce the number of these errors. The goal of this project was to determine the configuration of a neural network that enables it to detect anomalies in Internet traffic most accurately. A neural network simulator was used to imitate the real neural network in intrusion detection systems. The data used were TCP dump data extracted from a simulated Air Force network and fed into the neural network simulator. Before the data were fed in, four parameters of the neural network were configured: linear gain factor, number of iterations, number of hidden-layer units, and number of records used. The neural network was trained on labeled training data and then tested on unlabeled data. A number of experiments were run on different values of each parameter while keeping the values of the other three parameters constant. The response variable was the accuracy of the neural network at detecting anomalies. The values of each parameter at which the neural network was most accurate were recorded. The results of this study help optimize the neural network configuration to increase the efficiency of intrusion detection. This type of system can be deployed to improve the detection coverage of current intrusion detection systems.
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
2003 - CS011
INTERNET EFFICIENCY, CAN IT BE BETTER?
Christopher Daniel Kilpatrick
Terrebonne High School, Houma, Louisiana, United States of America
The purpose of this project is to determine whether changing the phone jack through which a computer connects will alter its connection speed. This researcher believed that by changing the phone jack to which the computer is connected, the speed would also change, enabling the user to choose a faster connection. For this project, a laptop computer was used to measure connection speed through various phone jacks using 3 Internet Service Providers (ISPs) and software designed to measure connection speed via bandwidth. Multiple tests were run on each ISP. Results showed a slight variance in speed among the inside phone jacks, but a significant increase in connection speed from the telephone network interface (external telephone connection) was
seen. These results led to the conclusion that while it may not be possible to run a computer directly from the interface box, a direct line could be run from the box to the room where the computer is located, boosting the connection speed. This procedure would allow the user to obtain the fastest dial-up connection possible through ordinary phone lines in his or her home.
2003 - CS003
KEYBOARD AND SOFTWARE FOR IMPROVED ACCESSIBILITY
Matthew P. Klaber
Mankato West High School, Mankato, MN, United States
Individuals with fine motor skill impairments are often left out of the technological revolution. This comes from a lack of products that allow for easy and efficient data entry into a computer. Although some devices claim to live up to this standard, they are cumbersome and expensive. They also require much more energy from the user than most are able to give.

It was my goal to modify a traditional keyboard by simplifying it for use by an individual with fine motor skill impairments. By providing larger but fewer buttons, a keyboard was designed to accommodate the concerns stated above.

To accomplish this goal, the required hardware was constructed. Using a generic USB controller, an ASCII keyboard controller, and modified buttons, a fully functioning adapted keyboard was produced. Following the design of the hardware, a software driver was designed and written to interpret the keyboard's unique signals.

The completed device and software allow people with physical limitations to easily access the world of computer technology. The total cost of all materials for this device was less than fifty dollars. Thus, the goals of the project were met. The device is more efficient, easier to use, and more economical than comparable devices available.
2003 - CS032
TINDERBOX: AN EXPLORATION IN GEOPOLITICAL MODELING, ABSTRACTION, AND STATISTICAL INFERENCE
Bradley Jay Klingenberg
Greeley Central, Greeley, CO, USA
The practice of developing (albeit imperfect) models is prevalent in a range of scientific disciplines. From end-behavior models in calculus, to Atwood machines and gas behavior models in physics, to migratory models in biology and zoology, such implicitly limited models are invaluable, and many advances of scientific knowledge are credited to them. Thus, it is not unreasonable to develop models of aggregate human behavior consistent with the principle of ceteris paribus.

TinderBox seeks to describe, model, and predict the potential for geopolitical conflict between India and Pakistan in the theater of Kashmir. The behavior and decision-making of both India and Pakistan are modeled in detail, using the latest and best intelligence from and about their respective militaries. Extensive empirical evidence from past conflicts, historical patterns, and general geopolitical and military theory has been incorporated at every level of the modeling system. A top-down, Pavlovian-inspired artificial intelligence algorithm capable of neural retention drives the modeling software, written in a mixture of C++ and assembly.

A novel statistical method that allows the inference of both scientific and policy-level conclusions was developed and employed. Named coefficient ratio threshold analysis (CRTA), it is ideally suited to be coupled with a geopolitical modeling system such as TinderBox. Using the CRTA methodology, it was established that the lack of permissive action link technology in active and coordinated use among the nuclear arsenals of India and Pakistan increases the likelihood of nuclear engagement by fully 40%. Such conclusions are not only of academic interest, but have far-reaching policy ramifications.
2003 - CS044
NON-DETERMINISTIC POLYNOMIAL TIME COMPLETE CALCULATIONS USING SYNTHETIC BUBBLE EMULATION
Daniel Lennard Kluesing
Leigh High School, San Jose, California, USA
Non-deterministic Turing machines are capable of solving Non-deterministic Polynomial time Complete (NP-Complete) problems in polynomial time. Soap bubbles blown onto a sheet of glass, with nodes representing cities, constitute a non-deterministic Turing machine capable of solving the Euclidean Traveling Salesman Problem. Soap bubbles are easily described by deterministic equations. The goal of this experiment is to produce an algorithm capable of emulating the TSP-solving properties of soap bubbles in polynomial time on a deterministic Turing machine. Given a planar graph G(V) containing n cities, a Voronoi diagram is generated. Voronoi edge lengths are used to identify city-poor regions of the graph and place bubble centers. Cities are associated with a minimum of one bubble and a maximum of three, in compliance with Plateau's laws, and bubble arcs are flattened to lines, giving a graph G1(V,E) with all vertices lying at the intersection of a minimum of two edges. A start city is defined and a Hamiltonian cycle is traced in a clockwise direction. The algorithm correctly predicts the location of bubbles and the cities bound to each bubble. The predicted bubbles are verifiable on a physical soap bubble array as the bubble configuration needed to solve a given graph, and the algorithm attempts to extract the shortest path from the bubble configuration. The algorithm is guaranteed to always produce a Hamiltonian cycle for a subset of G in an amount of time bounded by a fourth-order polynomial, and the runtime of the algorithm does not appear to be influenced by the arrangement of the cities on the graph. Due to weak edge detection methods, the algorithm excludes fringe cities from the Hamiltonian cycle for graphs of large n during the tracing phase. The number of cities included in the Hamiltonian cycle is based on the arrangement of cities in the graph. The author is working to implement more robust edge detection methods capable of including fringe cities.
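A hedged editorial sketch of the first step described above, building the Voronoi diagram over a hypothetical city set and measuring ridge lengths to find city-poor regions (this is not the author's implementation):

    import numpy as np
    from scipy.spatial import Voronoi

    # Hypothetical city coordinates on the plane.
    cities = np.array([[0, 0], [4, 0], [4, 3], [0, 3], [2, 1.5], [6, 1.5]])
    vor = Voronoi(cities)

    # Long finite Voronoi ridges separate city-poor regions; measure each one.
    for (p, q), verts in zip(vor.ridge_points, vor.ridge_vertices):
        if -1 in verts:                       # skip ridges that extend to infinity
            continue
        a, b = vor.vertices[verts]
        length = np.linalg.norm(a - b)
        print(f"ridge between cities {p} and {q}: length {length:.2f}")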
Awards won at the 2003 ISEF
Third Award of $250 - Bently Nevada, a GE Power Systems Company
First Award of $3,000 - Computer Science - Presented by Intel Foundation
2003 - CS002
UNBREAKABLE CHAOTIC ENCRYPTION ALGORITHM
Andrew Michael Korth
Morris Area High School, Morris, MN, USA
The goal of this project was to develop an algorithm to encrypt files in a way that cannot be broken.

Encryption by substitution applies a key to encrypt a file. Cracking substitution ciphers is accomplished by searching for regularities or patterns in an encrypted file. Longer and more irregular keys allow for more complicated patterns, making the cracking of an encrypted file more difficult.

In order to make a theoretically unbreakable cipher, I generated a key of indefinite length using a chaotic equation. Using chaos found in the logistic map, I was able to reliably generate a non-repeating key as long as the document to encrypt. Because the key is based on chaos, the key itself is entirely irregular and unpredictable. Using this key, I am able to encrypt (and decrypt) any sort of file.
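A minimal editorial sketch of the idea (assuming, hypothetically, the logistic map with r = 3.99 and a secret seed as the key; this illustrates the principle and is not the author's algorithm):

    def keystream(seed: float, n: int, r: float = 3.99):
        """Yield n pseudo-random bytes from the chaotic logistic map x -> r*x*(1-x)."""
        x = seed
        for _ in range(n):
            x = r * x * (1.0 - x)
            yield int(x * 256) % 256

    def crypt(data: bytes, seed: float) -> bytes:
        """XOR with the chaotic keystream; applying it twice decrypts."""
        return bytes(b ^ k for b, k in zip(data, keystream(seed, len(data))))

    msg = b"attack at dawn"
    enc = crypt(msg, seed=0.6180339887)       # the seed acts as the secret key
    print(crypt(enc, seed=0.6180339887))      # b'attack at dawn'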
2003 - CS030
AN ULTRA HIGH SPEED METHOD FOR THE MULTIPLE COMPARISON OF 3D STRUCTURES AND ITS APPLICATION TO 3D PROTEIN STRUCTURE
PATTERN RECOGNITION
Kevin Chester Kuo
Stuyvesant High School, New York, NY, USA
In today’s “information age,” the rapid influx of data has resulted in a high demand for efficient methods of managing and interpreting data. The primary objective of this project was to develop an ultra-high-speed algorithm for the recognition of 3D geometric patterns. The methodology developed has numerous applications in computational biology as well as in artificial intelligence systems that involve 3D shape recognition. In this project the test data were 3D protein atomic coordinates.

The methodology presented in this work can be broken down into two procedures. The first procedure involved the creation of a novel algorithm for multiple string comparison. This algorithm was able to achieve high efficiency of comparison because it utilized high-speed sorting algorithms. The second procedure involved converting structural parameters (coordinates) to string data so that the novel algorithm could be applied to the problem of 3D multiple protein structure comparison.

This novel method was implemented and tested on a Pentium 4 1.8 GHz PC. It is, by one count, times faster than the conventional dynamic programming method run on a supercomputer. The complexity of this algorithm is O( ), compared to the conventional O( ), where n is the number of amino acids in each protein and m is the total number of proteins compared. Various biologists tested the efficiency and accuracy of this method using programs based on conventional algorithms. In addition, this method can be applied to another major problem in computational biology: the comparison of genomes of different organisms.
Awards won at the 2003 ISEF
Third Award of $350 - IEEE Computer Society
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
DuPont's Center for Collaborative Research and Education, Office of Education recognizes individual and team winners in the categories that best exemplify
DuPont's business-related interests: Biology, Chemistry, Engineering/Physics, Earth/Environmental Science and/or Mathematics. Each project is recognized
with a Primary Award of $1,000. - National Aeronautics and Space Administration
2003 - CS048
SIMULATED CHAOS
Richard Taylor Kuykendall
Moorefield High School, Moorefield, WV, USA
Problem: Will a pseudo-random number generator created in Macromedia's Flash web authoring program be capable of producing 10% or less variance from that of a set of real dice, which is supposedly completely random in nature?

Hypothesis: I have deduced from my research that, using Macromedia's Flash web authoring program, it should be possible to create a program that will run within a ten percent variance of a set of true random numbers, to be determined by rolling a set of dice.

Procedure: My procedure will be designed in a way that will allow me to test the three random events I wish to examine. These are a mathematical distribution, a real set of dice, and the program that I create.

Data: Average deviance between dice and simulated dice: 0.75% difference.

Applications: The data that my project produced could be used to help determine the security of current methods of cryptography, computer games, web applications, and mathematical uses. It may also stimulate improvement in the mathematical functions that produce random numbers.
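A hedged editorial sketch of one way such a variance can be measured (Python rather than the project's Flash): compare simulated face frequencies against the uniform expectation.

    import random

    def face_frequencies(rolls):
        counts = [0] * 6
        for r in rolls:
            counts[r - 1] += 1
        return [c / len(rolls) for c in counts]

    n = 10_000
    simulated = [random.randint(1, 6) for _ in range(n)]   # stand-in for the Flash PRNG
    expected = 1 / 6
    freqs = face_frequencies(simulated)
    # Percent deviation of each face from the ideal uniform distribution.
    deviations = [abs(f - expected) / expected * 100 for f in freqs]
    print(f"max deviation from uniform: {max(deviations):.2f}%")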
2003 - CS306
EXPRESSION PROFILING: CAN A COMPUTER PROGRAM RECOGNIZE FACIAL EXPRESSION?
Juliette Michele Lapeyrouse, Philip Damian deMahy
Catholic High School, New Iberia, Louisiana, United States of America
The purpose of this project was to develop a computer program capable of recognizing human facial expression. Four testable human facial expressions were identified: happy, sad, angry, and surprise. The expressions were defined by their linear variations in six dimensions from a neutral face, which were converted into a series of mathematical proportions. It was hypothesized that the program would recognize each expression to produce an overall success rate of over 75%.

A device was constructed to stabilize the heads of subjects, which allowed the subjects to be photographed for data collection. Measurement techniques were also developed. Sixty subjects were photographed executing the four expressions along with a neutral (baseline) face. Six measurements were taken from each face (e.g., distance between the eyes). Each measurement was divided by its corresponding baseline measurement to calculate proportions. From these data, averages and standard deviations were calculated. A text file and computer program were then created. The program was tested
for accuracy on both the original set of subjects and a new set of ten subjects.

The program had an overall accuracy of 84% on the original set of data and 83% on the second set of subjects. In both studies, surprise and anger were most successfully recognized, and males and females had a relatively equal number of correctly identified expressions.

In conclusion, it was found that six measurements taken from a face provide enough vital information to properly identify the projected expression, and continuation of the research is anticipated.
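A hedged editorial sketch of the classification idea described above (hypothetical numbers; not the project's program): compare a face's six proportions against each expression's stored averages and pick the closest.

    # Hypothetical per-expression mean proportions for six facial measurements,
    # each normalized by the subject's neutral (baseline) face.
    MEANS = {
        "happy":    [1.10, 0.95, 1.20, 1.00, 1.05, 0.90],
        "sad":      [0.95, 1.05, 0.90, 1.00, 0.98, 1.02],
        "angry":    [0.90, 1.10, 0.95, 0.97, 1.00, 1.08],
        "surprise": [1.25, 0.90, 1.30, 1.10, 1.15, 0.85],
    }

    def classify(proportions):
        """Nearest-mean classification over the six proportion features."""
        def dist(means):
            return sum((p - m) ** 2 for p, m in zip(proportions, means))
        return min(MEANS, key=lambda expr: dist(MEANS[expr]))

    print(classify([1.22, 0.91, 1.28, 1.08, 1.12, 0.88]))   # -> "surprise"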
2003 - CS001
HEATSINK, WATER, OR DIELECTRIC FLUID COOLERS: IMPROVING CPU PC PERFORMANCE
Richard Sam Longenecker III
DeSoto County High School, Arcadia, Florida, USA
PURPOSE: Discover the effect of temperature on CPU performance by using three cooling methods. The researcher investigated which cooling method would lower the temperature of a central processing unit (CPU) most to increase processing speed. The researcher used a conventional heatsink and fan as the control, a water cooling unit with a radiator, and a dielectric cooling fluid (Fluorinert liquid manufactured by 3M).

METHOD: The researcher conducted nine trials, three for each of the cooling methods. Each trial used the Fuzzy Logic 4 program for measurement criteria. This Windows utility program permitted automatic adjustment of the Front Side Bus frequency to increase the CPU speed. The system unit automatically shut down after three minutes to avoid unnecessary harm to the hardware.

RESULTS: The researcher recorded, every 30 seconds, (1) processor speed in GHz, (2) voltage, (3) temperature in degrees Celsius, and (4) Front Side Bus frequency. Dielectric fluid was a 6.11 degree Celsius improvement over the conventional method and a 1.16 degree Celsius improvement over the water-cooled method; the water and dielectric results were not far apart. The conventional heatsink and fan was not as effective. The water cooling unit was continuously cooled by a radiator, but the dielectric fluid was not; the researcher theorizes that if the dielectric fluid had been continuously cooled below ambient, it would have been more effective.

HYPOTHESIS RESULTS: The researcher's hypothesis was supported by the results of the experiments. Performance of the personal computer was most improved by the ambient dielectric fluid.
2003 - CS304
VISUALIZATION OF TOOLS FOR THE HEARING IMPAIRED
Satish Mahalingam, Lee Perry, Frank Block
Little Rock Central High School, Little Rock, AR, USA
Communication for the hearing impaired is filled with challenges. Alternative methods of communication were developed to deal with these problems. The most widely used method of alternative communication is American Sign Language (ASL). ASL ranks fourth in number of speakers among languages in the United States, yet most people are not fluent.

This paper describes simple, practical hardware and software tools that help the hearing impaired in their daily lives. The tools include a glove that displays the American Manual Alphabet, a subset of American Sign Language, on a small portable screen or computer monitor; a text-to-sign software program; a sign-to-speech program; and virtual sign language.

The products were tested for efficiency, accuracy, and integrity. All the tests passed, and all twenty-six (26) alphabet characters were exercised on each of the systems. These tests proved the products successful in aiding the hearing impaired.
Awards won at the 2003 ISEF
Second Award of $1,500 - Team Projects - Presented by Science News
Second Award of $450 - Sigma Xi, The Scientific Research Society
2003 - CS309
TUTORIAL FOR SPACE AND TIME
Luis Guillermo Marin, Marco Vinicio Espinoza, Julio Cesar Santamaria
Colegio Cientifico Costarricense, Santa Clara, San Carlos, Alajuela, Costa Rica
Problem: Understanding this topic requires a great deal of knowledge if you want to research it in a book; besides, that is boring for people.

Hypothesis: A program will help people to understand this topic in an easy and attractive way, without the monotony of looking for the information in a book.

Objectives: Explain advanced theories about space, such as antimatter, black holes, relativity, etc. Teach people about possible future applications of these theories.

Methodology: The concepts were determined by looking for information in modern physics books, on web pages, and from people with great knowledge of physics; after the literature review, the program's database was created. When the database was done, the program's animations were created; after that, the software's environment and each of the program's forms were programmed. Finally, we reviewed the program to find mistakes and repair them one by one. Algorithms were designed using structured programming rules, that is, using selection and repetition structures, among others, to branch the program's sequence.
2003 - CS010
COMPUTATION WITH ONE-DIMENSIONAL CELLULAR AUTOMATA
Eric Andrew McNeill
Farmington High School, Farmington, Missouri, USA
A cellular automaton consists of an array of "cells" which can take on a specified number of values, along with a rule for updating the value of each cell based on its value and the values of its neighbors. Certain cellular automata have been shown to be computationally universal--capable of carrying out arbitrarily complex computations given appropriate initial conditions of the array. Among these rules is the two-value, nearest-neighbor cellular automaton
numbered as Rule 110. I hypothesized that it was possible to carry out computations of considerable sophistication using the localized structures of the Rule 110 automaton on a space of fewer than 1000 cells.

To attain this end, I created a computer program in the QBasic programming language to implement a Rule 110 automaton on an array of 640 cells, with pixelated output to the monitor. I used this program to compile a database of the structures possible in the system, and then to catalog the interactions between structures. Then I began to look for a way to implement a NAND operation using these structures.

I was not able to find a combination of structures which can emulate a NAND gate. I have, however, discovered a set of structures which act like a register for storing positive integer values. Other structures can add to and subtract from the value in the register. The existence of these combinations of structures shows that Rule 110 has significant symbolic computational capacity over a small space of cells.
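For readers unfamiliar with the rule, a minimal editorial sketch (Python rather than the project's QBasic) of one Rule 110 update: each new cell value is bit k of the number 110, where k encodes the three-cell neighborhood.

    RULE = 110  # bit k of 110 gives the next state for neighborhood pattern k

    def step(cells):
        """One update of an elementary cellular automaton (wrap-around edges)."""
        n = len(cells)
        out = []
        for i in range(n):
            left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
            pattern = (left << 2) | (center << 1) | right   # 0..7
            out.append((RULE >> pattern) & 1)
        return out

    cells = [0] * 31 + [1]          # single live cell on a 32-cell ring
    for _ in range(8):
        print("".join(".#"[c] for c in cells))
        cells = step(cells)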
2003 - CS022
COMPUTER GENERATED HYDROCARBON ANALYSIS
Christopher John Mitchell
Felix Varela Senior High, Miami, Florida, United States
The goal was to determine whether a computer program could be created that would produce, test, and find the most energy-efficient molecule from all possible hydrocarbons. It was hypothesized that such a program could be created.

This goal was accomplished using the Java programming language. Within the program, two data structures were created: one handles data for input and output, and the second performs the analysis. Methods were established for automatic generation of the molecules and for performing energy calculations. The program can generate and test all combinations of hydrocarbon molecules and determine the energy released from the combustion reaction of each. Molecules are ranked based on energy released per mole to determine the most efficient.

Tests were performed ten times to ensure accurate results. The tests included running the automatic testing sequence with 22 carbons, to simulate all transportation fuels, and running the manual testing sequence to test the ability to analyze molecules with rings in them. Analyzing the results, the program demonstrated that it was capable of automatically and accurately creating hydrocarbons in the chains (with single, double, and triple bonds) it was set up to handle. It also correctly calculates the energy released from each molecule upon combustion. The results of the manual testing are also accurate. Therefore, the program fulfilled the goals of the problem and confirmed the hypothesis. The program can be further improved with an improved interface and the addition of automated creation support for ring molecules and chains with branches.
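A hedged editorial sketch of one way such an energy calculation can be done (using average textbook bond enthalpies; not necessarily the project's method), for complete combustion of an alkane CnH2n+2:

    # Average bond enthalpies in kJ/mol (approximate textbook values).
    BOND = {"C-C": 347, "C-H": 413, "O=O": 498, "C=O": 799, "O-H": 467}

    def alkane_combustion_energy(n_carbons: int) -> float:
        """Approximate heat released burning CnH(2n+2) -> n CO2 + (n+1) H2O."""
        n = n_carbons
        # Bonds broken: (n-1) C-C, (2n+2) C-H, and (3n+1)/2 O=O.
        broken = (n - 1) * BOND["C-C"] + (2 * n + 2) * BOND["C-H"] \
                 + (3 * n + 1) / 2 * BOND["O=O"]
        # Bonds formed: 2n C=O (in CO2) and 2(n+1) O-H (in H2O).
        formed = 2 * n * BOND["C=O"] + 2 * (n + 1) * BOND["O-H"]
        return formed - broken          # positive = energy released

    print(f"methane: ~{alkane_combustion_energy(1):.0f} kJ/mol")   # ~820 kJ/mol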
Awards won at the 2003 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
2003 - CS015
ARTIFICIAL CREATION OF MUSIC USING MUSIC THEORY
Aadhar Mittal
Montfort School, Ashok Vihar Phase-I, Delhi, India
Human creativity is the least understood aspect of human behavior. When it is imitated computationally, as human intelligence often is, the results are of low quality. This project generates computerized music through different procedures and determines their quality. Music theory concepts were studied to gain insight into the making of music. Computerized methods of four kinds were designed: genetic programming; creativity theories using Markov chains; an algorithmic approach based on Zipf's law; and Lindenmayer's method (L-systems). These were implemented through programming to generate music, experimenting at the same time with different parameters. The results were analyzed by an artificial analytic system, and the music thus generated was listened to. The music has a systematic flow, rather than "inspired" elements. Further investigations as to whether computers can become music "composers" rather than generators can be undertaken based on these results. Creating music to a genre specification could also be undertaken.
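A hedged editorial sketch of the Markov-chain approach named above (a toy, not the project's system): learn note-to-note transition counts from a training melody, then sample a new melody from them.

    import random
    from collections import defaultdict

    training = ["C", "D", "E", "C", "E", "G", "E", "D", "C", "D", "E", "E", "D", "C"]

    # First-order Markov model: record which note tends to follow which.
    transitions = defaultdict(list)
    for a, b in zip(training, training[1:]):
        transitions[a].append(b)

    def generate(start="C", length=12):
        note, melody = start, [start]
        for _ in range(length - 1):
            note = random.choice(transitions[note])   # sample next note by frequency
            melody.append(note)
        return melody

    print(" ".join(generate()))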
Awards won at the 2003 ISEF
Summer internship - Agilent Technologies
2003 - CS057
MODELING PHYSICAL SYSTEMS USING FLUX CORRECTED TRANSPORT ON A FULLY THREADED TREE
Jonathan Albert Mizrahi
Thomas Jefferson High School for Science and Technology, Alexandria, VA, USA
This project consisted of writing code for FTTFCT, a computational method for computing flux-corrected transport (FCT) on a fully threaded tree (FTT). It simulates fluid flow with sharp discontinuities in less time and with lower overhead memory requirements than other flow solvers, while preserving accuracy. FTT is a mesh which can be adaptively refined, and FCT is a computational algorithm designed to be extremely accurate at shock fronts. FTTFCT will be invaluable in the simulation of any physical system involving steep discontinuities, such as explosions. A simple test problem is presented with timing and accuracy comparisons to demonstrate the advantages of using FTTFCT.
Awards won at the 2003 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
Scholarship in the amount of $8,000 - U.S. Navy & Marine Corps
2003 - CS308
NEED FOR SPEED: COMPARING METHODS OF OVERCLOCKING ON COMPUTER STABILITY
An Phuoc Nguyen, Luke Barrett Garfield, Rachel Louise Pollack
Classen School of Advanced Studies, Oklahoma City, Oklahoma, USA
The purpose of this project was to determine whether it is more advantageous to adjust the FSB speed or the Multiplier clock rate when overclocking. This information could benefit overclockers who want to get the maximum performance out of their computers. It was hypothesized that it is better to adjust the Multiplier rate than the FSB speed. First, a computer was built by connecting the hardware and installing an operating system. The processor used was an AMD K6-2 266MHz Socket 7, and the motherboard was a Gigabyte GA-5AX. Next, settings in the BIOS were adjusted so that the computer would shut down should the CPU core temperature reach 60° Celsius. Then the computer was overclocked by using the DIP switches to change the Multiplier and the jumpers to change the FSB. Five different combinations of FSB and Multiplier yielding processor speeds of about 300 MHz were used. The combinations with higher FSB speeds and lower Multiplier rates exceeded the temperature limit of 60° C, and the computer would not start. From these results, it can be concluded that adjusting the Multiplier more than the FSB produces a lower temperature and therefore a more stable computer-operating environment.
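Since the effective clock is the product of the two settings, the trade-off can be sketched as follows (hypothetical FSB/multiplier pairs; the abstract does not list the exact five combinations used):

    # CPU clock = FSB (MHz) x multiplier. Hypothetical pairs near 300 MHz.
    combos = [(66, 4.5), (75, 4.0), (83, 3.5), (95, 3.0), (100, 3.0)]
    for fsb, mult in combos:
        print(f"FSB {fsb} MHz x {mult} = {fsb * mult:.0f} MHz")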
2003 - CS054
TRAINING NEURAL NETWORKS WITH GENETIC ALGORITHMS
Nhan Duy Nguyen
Southside High School, Greenville, South Carolina
With the development of artificial neural networks, procedures for training these networks, called learning algorithms, had to be developed. This project looks at an attempt to bypass developing a specific learning algorithm by using a genetic algorithm to train neural networks. Genetic algorithms (GAs) are computational models, based on the biological concept of natural selection, that attempt to "evolve" or search for an optimal genome – in this case the weights for the neural network. The most common network structure in use today is a multilayer feed-forward network trained with the backpropagation algorithm. The purpose is to consider how the performance of a genetic algorithm in training a neural network compares to backpropagation, a more canonical learning algorithm.

The procedures consisted of developing computer software that compared the neural network performance of traditional learning algorithms to the GA approach. The software was configurable to test feed-forward networks with a wide selection of data. The tests were run using both the traditional backpropagation algorithm and GA learning. The program tested the neural network's ability to generalize the problem and the time required to reach an acceptable error level under both algorithms. The basic test setups covered function generalization and image recognition.

Results showed that the GA learning approach was significantly slower but slightly more accurate. This is likely because backpropagation tends to become stuck in local minima of the error function, while the GA is a global search. The fact that a GA was able to give acceptable performance is significant: if a new neural network setup were produced without a learning algorithm, a genetic algorithm could be used to train that network.
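A minimal editorial sketch of the approach (a toy genetic algorithm evolving the weights of a tiny feed-forward network on XOR; illustrative only, not the project's software):

    import math, random

    def forward(w, x1, x2):
        """2-2-1 feed-forward network; w is a flat list of 9 weights (with biases)."""
        h1 = math.tanh(w[0]*x1 + w[1]*x2 + w[2])
        h2 = math.tanh(w[3]*x1 + w[4]*x2 + w[5])
        return math.tanh(w[6]*h1 + w[7]*h2 + w[8])

    XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

    def error(w):
        return sum((forward(w, a, b) - t) ** 2 for a, b, t in XOR)

    random.seed(1)
    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(50)]
    for gen in range(300):
        pop.sort(key=error)
        survivors = pop[:10]                       # selection: keep the fittest
        pop = survivors + [
            [w + random.gauss(0, 0.3) for w in random.choice(survivors)]
            for _ in range(40)                     # mutation-based reproduction
        ]
    print("best error:", round(error(min(pop, key=error)), 4))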
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
$10,000 per year scholarships, renewable annually - Florida Institute of Technology
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2003 - CS064
COMPUTERS: DO YOU HAVE THE NEED FOR SPEED?
John Frankling Noehl
Hahnville High, Boutte, Louisiana, USA
Today, companies such as Dell, Gateway, and Hewlett-Packard often use the common person's computer illiteracy to their advantage, to make the biggest buck for the least work. To do this they constantly make claims of increased clock speeds for various components within a computer, such as the CPU, FSB, and IDE bus. I did my project to make the public more aware of how the components of a computer are interrelated, and to give the common person a sense of "what to look for" when purchasing a computer or making upgrades to a current one.

To gather results for my experiment, I individually tested the various devices of the computer at various clock speeds and timings. This was done in these steps:

1. Boot computer. 2. Load Windows. 3. Run test in testing program (PCMark 2002). 4. Shut down computer. 5. During POST, hold down the DEL key to bring up the BIOS and make changes to options such as FSB or cache. 6. Save settings and exit BIOS. 7. Load Windows. 8. Run test with the new settings to gain data for them. 9. Save results. 10. Shut down, change settings in BIOS back to the original configuration, make the next change, and repeat from step 6 until finished with all processes.

In conclusion, I found that a sum of various devices truly affects the overall speed of the computer; they are interrelated and dependent on one another. From a percentage standpoint the CPU was the largest factor, yet the FSB and many other factors played just as vital roles.
2003 - CS311
GALACTIC DOMINATION
Giuseppe William Palazzolo, Rui Ma
Honourable Vincent Massey, Windsor, Ontario, Canada
The Galactic Domination project has evolved tremendously over the last few months. The program started as a database management program and has now been altered to provide a more recreational experience. The focus of Galactic Domination was to create a user-friendly environment in which people of all ages can enjoy a "friendly" game of conquest for the galaxy. Although the new version relates to the database project in its use of record tracking, the database is now used for user information. The goals of the Galactic Domination series of programs are to provide opportunities to experiment with new and innovative methods in order to create an enjoyable online gaming environment for all, using a visually stimulating graphical interface.

In the process of completing this program,
many procedures were used. First, we created many graphical images and interfaces using advanced imaging editors such as 3DS Max, Adobe Photoshop 7.0, Flash MX 4.0, and Corel Rave 1.0. Next came the actual programming, for which the Object Oriented Turing language was used. The gaming interface was created, and then the database was used to store the user information. Although the game models the board game Risk, many rules have been modified, along with the story line and its setting in space. Lastly, we made the game network-enabled. Therefore, it can be concluded that Galactic Domination integrates challenging and authentic components that make it enjoyable and give it real commercial value.
Awards won at the 2003 ISEF
Fourth Award of $500 - Team Projects - Presented by Science News
2003 - CS040
MAKING A PICTURE WORTH MORE THAN A THOUSAND WORDS
Daniel Jonathan Parrott
Bartlesville Mid-High, Bartlesville, Oklahoma, USA
The goal of this project is to demonstrate the possibility of concealing information inside pictures. Because computers represent both pictures and the information to hide as numbers, the two can be combined. This allows information to be hidden inside a picture by making small differences in the color values correspond to the hidden information.

To demonstrate this idea, a computer program was written. The user specifies a picture to use along with the information he or she wishes to hide. That information is read byte by byte, with each byte corresponding to a difference in a color value in the picture. Next, the program saves these modifications as a separate image at the desired location on the hard drive.

To retrieve the hidden information, the program compares the original and modified pictures and uses the differences in color values to reconstruct the hidden information. This is why the technique offers such strength: only someone given both the original picture and its counterpart can reacquire the information.

Executable programs, documents, drawings, photos, whole books, and maps can be hidden inside pictures. As just one example, a digital copy of the novel "Great Expectations" was concealed within an ordinary picture. As a result, this project successfully demonstrates that information can be concealed within pictures.
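A hedged editorial sketch of the difference scheme described above (illustrative, not the project's program). For simplicity each secret byte is stored whole as a change to one pixel of a grayscale image held as a flat list; the project describes keeping the differences small:

    def hide(original, secret):
        """Embed each secret byte as a difference added to one pixel value."""
        assert len(secret) <= len(original)
        out = list(original)
        for i, byte in enumerate(secret):
            out[i] = (out[i] + byte) % 256     # modification of the color value
        return out

    def reveal(original, modified, length):
        """Recover the bytes by differencing the two images."""
        return bytes((m - o) % 256
                     for o, m in zip(original[:length], modified[:length]))

    pixels = [120, 33, 200, 87, 54, 190, 12, 250, 66, 141]   # toy grayscale image
    msg = b"hi!"
    stego = hide(pixels, msg)
    print(reveal(pixels, stego, len(msg)))                    # b'hi!'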
Awards won at the 2003 ISEF
Honorable Mention Award Certificates for International students and students under the age of 16 - National Aeronautics and Space Administration
Honorable Mention of $100 - U.S. Coast Guard Research and Development
2003 - CS049
DYNAMICALLY DISPLAYING DATA: PRESENTING INFORMATION BASED UPON A USER'S FACIAL REACTIONS
Saagar Bhupendrakumar Patel
Celebration School, Celebration, FL 34747, USA
In modern computer systems, one of the most prevalent software problems is the lack of advanced user input mechanisms. For years, the keyboard and mouse have been the default and only supported standard of user input. With more powerful and capable computers, opportunities for smarter interfaces have become more plentiful, but remain unused. The purpose of this project is to examine and implement new user input mechanisms based upon recognition of facial features.

The experiment lies in the comparison of different algorithms and methods of facial detection. After multiple methods of detection were programmed, a short test sequence was run. The results of each test were then compiled to determine the accuracy and effectiveness of each algorithm. The basic detection method was an edge-finding procedure, and the various permutations were run with different parameters for the edge-finding method.

After experimentation, the most effective implementation was the eight-directional edge-finding method; by using a simple pixel color difference test, the software could assess the user's facial expression accurately almost 90% of the time.

The final software can be effectively applied to low-budget, real-world environments. Visual detection is a very useful and interesting method of data manipulation, and provides the user with a new and enhanced method of giving the computer feedback. Further experimentation would pursue alternate means of input, such as eye gaze detection.
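A hedged editorial sketch of a pixel-difference edge test of the kind named above (eight-directional; illustrative only, not the project's implementation):

    def is_edge(img, x, y, threshold=30):
        """Flag (x, y) as an edge pixel if any of its 8 neighbors differs strongly."""
        center = img[y][x]
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dx, dy) == (0, 0):
                    continue
                ny, nx = y + dy, x + dx
                if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                    if abs(img[ny][nx] - center) > threshold:
                        return True
        return False

    # Toy 4x4 grayscale image with a bright block in one corner.
    img = [[200, 200, 10, 10],
           [200, 200, 10, 10],
           [10,  10,  10, 10],
           [10,  10,  10, 10]]
    edges = [(x, y) for y in range(4) for x in range(4) if is_edge(img, x, y)]
    print(edges)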
Awards won at the 2003 ISEF
Second Award of $500 - Eastman Kodak Company
2003 - CS060
AUTOMATIC PLANKTON SEARCHING SYSTEM FOR MICROSCOPE
Chula Pittayapinun
Hatyai Wittayalai School, Hat Yai, Songkla, Thailand
Plankton are an important part of the food chain in the ecosystem, so determining the size of a plankton population is very significant in ecology research. One easy way to determine the population is to use a microscope to help count the number of plankton in a unit of water. However, doing so is inconvenient when handling multiple samples. We propose a new image processing method to classify the types and count the number of target plankton. This work can provide real-time readings using a small camcorder, or another device with a similar function, connected to a microscope. The proposed method focuses on solving three fundamental image-processing problems: 1) detecting out-of-focus images, 2) viewing a transformable object, and 3) overlapping images. We use several advanced image processing techniques to solve these problems and adopt a comparison technique, based on static properties specific to each object in the images, to increase the accuracy of plankton detection in our work.
2003 - CS009
BIT LOSS REALLY BYTES!
Zachary Aaron Radke
Columbia High School, Lake City, Florida, United States of America
The intention of this experiment is to find which CD standard (-R or -RW) and file system is more resilient to degradation by heating and exposure to UV radiation. The CD-RW is expected to be more resistant to degradation than the CD-R because of the burning method and file system used. A packet-writing method is used for the CD-RW and the disc-at-once method for the CD-R. The CD-RW is formatted using the CDUDFRW file system, which is made with redundancy built in and is lenient on file name length, characters used in file names, and file attributes. To start, six CD-Rs and six CD-RWs were recorded with four data types: text (.rtf), audio (.wav), image (.tif), and program (.exe). The UV degradation test involves one CD-R and one CD-RW on a white surface for four hours in UV-6 sunlight. For heat degradation testing, the CDs were placed in 212°F water for two hours; both tests were replicated twice. Data degradation was determined by a unique color count in the image and unique samples in the audio file. The RTF file was analyzed for the number of lines of formatting code, and the program was installed and any error messages encountered were recorded. The results were compared to the controls' corresponding information. The CD-RW performed better under UV rays. The CD-R was least affected by heat; its high tolerance to heat could be caused by the higher recording temperature. All file types were damaged, but only the program was inoperable.
2003 - CS050
ARTIFICIAL INTELLIGENCE: A MODEL OF PERCEPTION BY THE VISUAL SYSTEM
Kimberly Elise Reinhold
St. Joseph High School, Hilo, Hawaii, United States of America
The human brain presents scientists with an ideal guide to practical and efficient image processing techniques. Computer simulations that model this biological
system provide insights into the neurophysiologic mechanisms of pattern recognition and learning, suggest methods for the investigation of artificial intelligence,
and direct the design of useful applications. In this project, programs were written in the C language to pursue experimental and conceptual advances in
support of these three goals. The multilayered and hierarchical organization of the human visual system was used as a basis for the development of complex
artificial neural networks that could perform applications requiring higher level cognitive functions -- classification of blood and bone marrow cells by
morphology. The four main aspects of visual perception (size, form, position, and color) were utilized as discriminants in these computer models. <br>
<br>Training of the networks with digitized cell images using a self-supervised learning algorithm was rapid and efficient (25-57 runs). Recognition of unfamiliar
cells was highly accurate (90-100%), sensitive (80-100%), and specific (98-100%) over a broad range of starting conditions for neural thresholds and weights.
Higher level associative processes allowed correlation of cell types with various diseases. The abilities of the systems surpassed those of current hematology
analyzers.

Techniques and principles developed in these experiments are scalable and extendable to a large set of pattern recognition problems. In
addition, the networks can adapt, generalize, and find multiple solutions to the problem of cell classification, thus displaying hallmark attributes of complex brain
function and intelligence.
Awards won at the 2003 ISEF
Award of $2,500 and a high-performance computer - Intel Foundation Women in Engineering and Computational Sciences Award
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Second Award of $150 - Patent and Trademark Office / U.S.Department of Commerce / Patent and Trademark Office Society
Award of $500 - Schlumberger Excellence in Educational Development
2003 - CS046
MR. FATTY: A KNOWLEDGE-BASED OBESITY CURING AND MONITORING SYSTEM USING CONSTITUTION
SeungKyun Ryu
Jangyoungsil Science High School, Busan, South Korea
Nowadays many people in the world suffer from obesity, which causes serious social problems as well as health problems. Obesity can be treated by three methods: dietary therapy, exercise therapy, and behavior therapy. In order to perform all these methods, specialized and detailed technical knowledge and continuous application of the methods are necessary. Unfortunately, most people's lack of medical knowledge and patience impedes their ability to cure obesity.

In this paper, a knowledge-based obesity curing and monitoring system using constitutional medicine, called 'Mr. Fatty,' is introduced. 'Mr. Fatty' helps people cure obesity scientifically, automatically, and continuously. It provides the three curing methods (dietary therapy, physical activity, behavior therapy), which enable people to cure obesity in various ways. As a knowledge-based system, it takes inputs such as physical body data and customer goals, gives advice of three kinds (diet, exercise, behavior), and finally gets feedback, such as output data or changed goals, from customers and reflects that feedback back into the system. It also uses Oriental medical science to give each customer specialized obesity monitoring and treatment. Based on the 'Sa-sang Constitutional Medicine' remedy, 'Mr. Fatty' analyzes the user's constitution scientifically and then helps the user to reduce obesity, as well as monitoring it after treatment.
Awards won at the 2003 ISEF
Honorable Mention Award - Endocrine Society
2003 - CS017
CAN NATURAL ARACHNID MOVEMENT BE USED TO MODEL A ROBOT?
Rudy Daniel Sandoval
Bartlesville High School, Bartlesville, Oklahoma, United States of America
The purpose of this experiment was to see if real-life arachnid movement could be simulated in a 3-D computer environment.

The project began by reading chapters from a C++ programming book to learn the computer language with which the hypothesis was tested.

The next step was the filming of a tarantula. Three cameras, one on each side and one above the spider, recorded its walk across the floor. Each film was assigned a starting time to keep all films in sync with each other. Each frame was digitized by storing the joint, leg, and body points into an array. Using these points, software was developed to animate the robotic spider in a 3-D computer virtual world.

After the creation of the model, rotation was desired to control the direction of the model. A virtual sphere was created around the model in order to rotate it.

The final step was to identify mathematical equations which duplicated the spider's movements. These equations were then applied in the program and an analytical spider was created.
Awards won at the 2003 ISEF
First Award of $1,000 - Association for Computing Machinery
Scholarship award of $8,000 per year for four years. - Hiram College
Tuition scholarship of $5,000 per year for 4 years for a total value of $20,000 - Indiana University
2003 - CS045
AN EFFECTIVE DATA DISTRIBUTION NETWORK SYSTEM FOR THE MUD GAME
Seok Young Song
Gyeongnam Science High School, Jinju-si, Gyeongsangnam-do, Republic of Korea
Due to the rapid advancement of networks, we can use more complex and sophisticated services via the network. The "MUD game" is one such service. Until now, general MUD games were based only on text, and the users exchanged text data with each other, so it was not difficult to play a MUD game over a slow network. But as the quality of service became more advanced, graphical interfaces became available to MUD games, and the number of users increased. Because of the frequent exchanges of large data and the huge number of users, the server cannot process the data within the expected time; consequently, the "lag" phenomenon frequently arises. To address this problem, I devised a new network structure called MN by modifying an existing one called DOOMVAS. By using parallel links in connecting clients, the MN can solve the lag phenomenon. This paper describes the structure of MN and simulation results of the experiments. In addition, this paper proposes an algorithm that can make the MN effective. Furthermore, to address security problems caused by linking one client to another in parallel, a data type is defined to separate protection levels. And to examine the effectiveness of MN, this paper compares the MN with DOOMVAS using simulation.
2003 - CS021
BUILDING A BETTER B.E.A.R. (BOLTON'S EARLY ACCESS REGULATOR)
Branson Burnett Sparks
Bolton High School, Alexandria, Louisiana, USA
This project tests the hypothesis that the entry procedure at Bolton High School can be streamlined by automation and enhanced by utilizing custom computer software and hardware interfaced with common computer and telephone equipment.

At Bolton, students needing to enter the building before school had been required to write their name, the time, and the teacher they wished to see. Since the listed names appeared in chronological order, teachers had to read the entire list to find which students had signed in to see them. Also, student handwriting was often hard to decipher; plus, teachers had to allow time to visit the faculty lounge to check the sign-in sheet containing approximately 100 students' signatures.

Original software and hardware specific to this project were written and constructed. From a quantitative standpoint, data on the times required by both the new and old systems were gathered and compared. From a qualitative standpoint, data were gathered from a random and anonymous survey. Both quantitative and qualitative data support the hypothesis.

Other applications of this project include keeping track of people within a given facility, such as a prison, a refugee center, or even a large summer camp. It could be useful in maintaining an accurate list of students' names and locations in the event of an accident or disaster in the building. Another application for student management would be to log the names and times of students who enter a school-sponsored or extracurricular event, such as an athletic game or dance.
Awards won at the 2003 ISEF
Second Awards of $1,500 - U.S. Air Force
2003 - CS042
VLSI DESIGN AUTOMATION
Colin Pearse Sprinkle
Detroit Country Day, Beverly Hills, MI, USA
As the size and complexity of today's most modern computer chips increase, new techniques must be developed to effectively design and create Very Large Scale Integration (VLSI) chips quickly. For this project, a new type of hardware compiler was created. This hardware compiler reads a C++ program and designs a physical layout of a suitable microprocessor intended for running that specific program. With this new and powerful compiler, it is possible to design anything from a small adder to a full-featured microprocessor. Designing large microprocessors, such as the Pentium 4, can require dozens of engineers and months of time. With the help of this compiler, a specialized processor can be designed effectively in a shorter amount of time.
Awards won at the 2003 ISEF
IEEE Foundation award on behalf of IEEE Region 2 and the Cleveland Section, an award of $250 - IEEE Foundation
2003 - CS008
THE INTEGRATED GRAPHIC APPLICATION “STAREDITOR”
Sergey Yu. Starkov
Vyatka Humanitary Gymnasia, Kirov, Russia
The purpose of the work is the creation of a program which offers the most popular methods of graphic processing, gives the user a wide spectrum of interface adjustments, and can translate the main control elements into various languages.

The author investigated the historical data and the software available on the Internet, and surveyed users to confirm the relevance of certain problems.

The results confirmed the necessity of solving the above-listed tasks: the majority of the surveyed users wish to have the ability to adjust the interface of the programs they use; the pace of the Internet's development pushes distributors of software on a wide-area network to support translation of their programs into the languages of users from other countries; and the number of Internet resources is growing, so the demand for software for their creation, in particular integrated graphic editors, grows as well.

The author investigated all feasible methods of working with graphic files and came to the conclusion that it was necessary to develop and build into the program a utility for creating presentations as .exe files.

The program "Stareditor" unites the most popular methods of graphic processing. The author created and embedded a system of dynamic interface adjustment in the program. The created utility can produce computer presentations as an .exe file or as a screen saver. A system of multi-language support is embedded and advanced.
Awards won at the 2003 ISEF
First Award of $700 - IEEE Computer Society
2003 - CS063
THE ICALI COMMUNITY ACTIVITY PLANNER: DESIGN AND DEVELOPMENT
Michael Robinson Stone
Allegany High School, Cumberland, MD, USA
Purpose: People today lead busy lives - many have complained of feeling "disconnected" from life. The Icali Community Activity Planner will help to solve these problems by helping people to discover events in their communities, to manage their group commitments, and to control their schedules for maximum efficiency.
Engineering Goals: To:
1. Allow easy access to information about community events.
2. Allow users to find other users with similar interests and transportation needs and to facilitate ride-sharing agreements between these users.
3. Act as a personal scheduler to help users to keep track of their social commitments.
4. Act as a history tool for use in sociological and anthropological research.
5. Be available anywhere, any time, on any reasonable medium.
Procedures: A software design was created to fulfill the problem's requirements. This design was implemented using the technologies that were found to be most suitable by a comprehensive technology review. The implementation was then evaluated for compliance with the problem statement, detailed design, and engineering goals.
Conclusion: This release fulfills all of the engineering goals of the project. The software allows quick access to all event information and helps users to discover related events and other users with similar interests. It attends to transportation needs with support for carpools and maintains the data in an easy-to-access SQL database. The Icali Community Activity Planner is a valuable aid for managing its users' busy schedules.
2003 - CS016
TEXT EXPLORER
Sangameswaran Tejeshwar Tandon
S.B.O.A. Schools and Junior College, Chennai 101, TN, India
Purpose of Experiment: The development of Text Explorer with additional functions started after a survey in which 73% of respondents wanted an advanced word processor. The main purpose of this project is to make word processing fast, efficient, and usable by the handicapped.
Procedure used: This program was developed in Microsoft Visual Basic; the speech DLLs were designed in Visual C++. It has a multiple document interface, i.e., more than 100 documents can be opened and edited at the same time. It contains 25 forms, which make up the outer structure, and 26 modules and 4 class modules, which contain most of the programming. The program has 5 enhanced user controls. The whole source code of the program is approximately 11,000 pages (ASM code).
Special Functions: Text to Speech, Voice Command, Voice Dictation, Text Fader, Special Handicapped Mode, Decimal to Roman and Vice Versa, HTML Editor, Homework Helper, 5+ Advanced Encryption & Decryption, Encrypt Text into Picture, MP3 Player with lyrics support, and many more.
Conclusion: These functions would allow a person to finish a job quickly and efficiently. With the special handicapped mode, even the disabled can use it. Powerful encryption and decryption options allow confidential messages to be sent over the network or Internet securely.
Awards won at the 2003 ISEF
Second Award of $500 - IEEE Computer Society
2003 - CS056
JOHN SETH THIELEMANN'S OPERATING SYSTEM (JOST OS)
John Seth Thielemann
Cumberland Valley High School, Mechanicsburg, PA, USA
This project was created to control the environment of an 80x86 Intel Architecture compatible computer. The project provides a new and original file system, and is capable of performing basic user functions. Design and engineering of many important components and structures had to take place before any real coding work could begin, to reduce potential problems later. The JOST Operating System was coded in the 80x86 Intel Architecture Assembly Language instruction set, compiled and linked with the Microsoft Macro Assembler. To actually make a JOST Operating System boot disk, another program had to be written to configure the bootstrap, initialize the floppy disk, and place the Operating System's kernel on the disk. The entire Operating System was built in pieces; as one was completed, another would be started, saving countless hours of debugging. To debug problems that would inevitably be encountered when writing the Operating System, an internal kernel command was created. This command prints important kernel data to the screen so that the data can be checked for validity. Thus, if any data had been corrupted or was erroneous, the kernel could be modified to fix the bug. To import a program, or any other type of file, another internal kernel command was created to copy data from one disk and store it on the JOST OS file system. This project was a success, far surpassing my original goals, and will be the stepping-stone for future modifications and advancements to the Operating System.
Awards won at the 2003 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
Scholarship award of $5000 per year for four years - University of Akron
Second Award of $500 - IEEE Computer Society
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2003 - CS067
GENETIC ALGORITHMS: A NEW APPROACH TO CONTROLLED SWITCHED NETWORKS
Jeff Thompson
Kinkaid, Houston, TX, USA
This project explores the effectiveness of genetic algorithms (GAs) at optimizing controlled switched networks (CSNs). There are no currently known optimization methods for CSNs that are significantly better than exhaustive search, making CSN optimization computationally intractable (the problem is NP-complete). The goal of this project is to exploit GAs to find good solutions to CSN optimization problems in polynomial time.
Controlled switched networks (CSNs) - a subset of network flow problems - are defined here as networks of paths with controllable one-way switches at every vertex that regulate traffic on all paths crossing that vertex. Practical examples of CSNs include street networks, rail systems, and oil pipelines.
To test the application of GAs to CSNs, a study was conducted using street networks, with traffic light timing cycles as the optimization parameter. Four phases of work were needed to complete this task: the development of a traffic simulator to evaluate solutions, the development of a genetic algorithm that uses the simulator to find optimized light settings, the tuning of various parameters within the GA to maximize its performance, and the determination of the order of the optimization process.
Once the algorithms were written and tuned, it could be seen that GAs present a very practical and effective solution to CSN optimization, operating in high-order polynomial time. Although not perfectly efficient, they represent a huge leap from exhaustive search and make CSN optimization a tractable task.
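The abstract describes the GA phase but not its code; as a minimal sketch of the kind of loop involved, the Python below evolves per-intersection green-light durations. The genome encoding, the toy fitness function simulated_delay, and all parameter values are hypothetical stand-ins for the author's traffic simulator and tuned settings.

import random

# Hypothetical stand-in for the author's traffic simulator: scores a
# genome of per-intersection green-light durations (lower delay is better).
def simulated_delay(genome):
    return sum((g - 30) ** 2 for g in genome)  # toy objective

def evolve(n_lights=10, pop_size=50, generations=100,
           crossover_rate=0.8, mutation_rate=0.05):
    pop = [[random.randint(10, 90) for _ in range(n_lights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulated_delay)          # best (lowest delay) first
        next_pop = [pop[0][:], pop[1][:]]      # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:pop_size // 2], 2)  # parents from top half
            child = a[:]
            if random.random() < crossover_rate:          # one-point crossover
                cut = random.randrange(1, n_lights)
                child = a[:cut] + b[cut:]
            for i in range(n_lights):                     # per-gene mutation
                if random.random() < mutation_rate:
                    child[i] = random.randint(10, 90)
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=simulated_delay)

print(evolve())

With the toy objective above, the population converges toward the all-30 genome; the project's real fitness came from running the traffic simulator.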
Awards won at the 2003 ISEF
Award of $500 - Association for Computing Machinery
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2003 - CS019
THE DEVELOPMENT, IMPLEMENTATION AND OPTIMIZATION OF A FULL-LATENCY, NON-SEQUENTIAL, PROBABILISTIC, LOSSLESS DATA
COMPRESSION ALGORITHM
Ian Michael Trotter
Berg vgs., Oslo, Norway
I describe a full-latency, non-sequential, probabilistic, lossless data compression algorithm and an implementation of it. The implementation was tested to check whether it was useful for data storage or data transfer over networks.
The algorithm works by finding the most common combination of three adjacent bytes and replacing it with a combination of two bytes, or a single byte, that is not represented in the file. This process is entirely reversible, and should reduce the size of the file.
The implementation, containing certain performance parameters, was tested by repeatedly compressing files from the Calgary Compression Corpus [8] with different settings. The optimal parameters were thus found, and subsequent testing could use these parameters to determine the overall efficiency of the algorithm.
The testing revealed that this method is useful for storage, but not sufficiently fast to accelerate data transfer over networks with reasonable bandwidth (more than 28,800 bps).
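As a rough illustration of the substitution step described above (not the author's implementation, which also uses two-byte replacement codes and tunable performance parameters), here is one pass of the scheme in Python, restricted to single unused byte values; repeating the pass drives the compression.

from collections import Counter

def compress_pass(data: bytes):
    """One pass of the described scheme (a sketch): replace the most common
    3-byte sequence with a byte value that never occurs in the file.
    Returns the rewritten data plus the substitution needed to reverse it."""
    unused = set(range(256)) - set(data)
    if not unused:
        return data, None                      # no free byte value: give up
    triples = Counter(data[i:i+3] for i in range(len(data) - 2))
    if not triples:
        return data, None
    target, _ = triples.most_common(1)[0]
    token = bytes([unused.pop()])
    return data.replace(target, token), (token, target)

def decompress_pass(data: bytes, rule):
    token, target = rule
    return data.replace(token, target)

sample = b"the cat and the hat and the bat"
packed, rule = compress_pass(sample)
assert decompress_pass(packed, rule) == sample

Because the token never occurs elsewhere in the file, the replacement is exactly invertible, which is what makes the scheme lossless.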
2003 - CS302
SOFTWARE FRAMEWORK FOR CREATING MULTI-AGENT SYSTEMS
Vsevolod D. Ustinov, Andrew S. Tatarinov, Vasiliy O. Fedoseev
Lyceum #1533 (LIT), Moscow, Russia
The aim of the project was to create a software toolkit for developing and exploring multi-agent systems, which are widely used in AI research. The toolkit includes an IDE for 2 programming languages (source editor, compilers, virtual machines, and debuggers), tools for multi-agent systems support, and 3D visualization. An open component architecture is used as the main approach. All the components are plug-in modules, and their loading and linking is based on the ideology of the Component Object Model. The Core consists of: (1) the Linker module, which is responsible for generic inter-component and inter-language communication; (2) the event-driven Execution Dispatcher, which controls threads and processes; (3) the objects Database; (4) the Structural Event Journaling and Exception Dispatcher; and (5) the System Console for direct user control. The Shell consists of: (6) the Agent Server, which provides agents' creation and interaction;
(7) the Physical module for modeling the virtual world where agents can move and collide; (8) the Visualization module, based on OpenGL, which allows monitoring of agents' behavior; and (9) the Sound Engine. All the modules are platform-independent and based on a Software Abstraction Layer, which virtualizes the processing of OS-specific tasks. The languages currently used to describe agent behavior and configuration are Smalltalk and a simple C-like language. The translators compile sources to byte-code, which is executed by the Execution Dispatcher through the Linker when an agent's method is called. The system can also support other programming languages. On the whole, the project allows the user to create and improve agent-controlling algorithms and multi-agent systems for efficient education and research.
Awards won at the 2003 ISEF
Personal computer system - Intel Foundation Best Use of Computing
Second Award of $1,500 - Team Projects - Presented by Science News
Team award of $1000 USD to be divided among the team members - Schlumberger Excellence in Educational Development
2003 - CS014
APPLYING GENETIC ALGORITHMS TO DYNAMIC TRAITOR TRACING
Adam Charles Vogel
St. Charles West High School, St. Charles, MO, USA
Broadcasting electronic content for money is becoming common practice in today's business world; but unfortunately, so is the illegal pirating of that content. Pirates, also known as traitors, take controlled pay-to-view content and rebroadcast it without the permission of the content provider. Computer scientists, most notably Tamir Tassa and Amos Fiat, have introduced dynamic traitor tracing schemes to trace those responsible for the piracy. By using digital watermarks to create different versions of the same content and giving different groups of users different versions, they are able to converge on the pirate(s) based on which version of the content is rebroadcast. The proposed algorithms have been designed by hand, and there is no proof, either mathematical or experimental, that more efficient traitor tracing algorithms are nonexistent. This research attempted to create more efficient algorithms using genetic algorithms. Genetic algorithms use the concepts that created advanced life to optimize problem-specific algorithms, in this case dynamic traitor tracing algorithms. By creating an "ecosystem" of randomly generated dynamic traitor tracing algorithms and allowing only the successful ones to survive, the principle of survival of the fittest selected only the most efficient algorithms. Successful algorithms were bred with each other, creating child algorithms that were a mixture of both parents. Random mutations to algorithms also took place, which mimicked evolution in the real world. This process was repeated for generations, until efficient algorithms emerged. The generated algorithms did improve through the simulation but were inefficient in comparison to previously hand-made dynamic traitor tracing algorithms.
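For readers unfamiliar with the underlying schemes, the Python sketch below shows the core watermark-tracing idea for the simplest case of a single pirate: it is a plain bisection strategy, not one of the hand-designed Fiat-Tassa schemes or the GA-evolved algorithms this project studied. The oracle function is a hypothetical stand-in for observing the pirated stream.

# A minimal sketch of the watermark-tracing idea the abstract builds on
# (not the author's GA-generated schemes). With a single pirate, giving
# each half of the suspects a differently watermarked version and watching
# which version is rebroadcast halves the suspect set each round.
def trace_single_pirate(users, pirate_rebroadcasts):
    suspects = list(users)
    while len(suspects) > 1:
        half = len(suspects) // 2
        group_a, group_b = suspects[:half], suspects[half:]
        # pirate_rebroadcasts returns the watermark ("A" or "B") seen
        # in the pirated stream for this round's version assignment
        seen = pirate_rebroadcasts(group_a, group_b)
        suspects = group_a if seen == "A" else group_b
    return suspects[0]

# Usage: the "oracle" below simulates a pirate hiding among 16 users.
pirate = 11
oracle = lambda a, b: "A" if pirate in a else "B"
assert trace_single_pirate(range(16), oracle) == pirate

The hard part the real schemes address, and that the GA search explored, is handling multiple colluding pirates with as few watermarked versions and rounds as possible.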
Awards won at the 2003 ISEF
Award of $500 - American Association for Artificial Intelligence
2003 - CS065
BALANCING CHEMICAL EQUATIONS? IT'S ELEMENTAL!
Dietrich James Wambach
Guernsey-Sunrise High School, Guernsey, Wyoming, USA
The purpose of this project was to write a computer program useful for chemistry tasks dealing with elements. The main task is to balance simple chemical equations. I also wanted my program to weigh molecules, display a periodic table, and be able to sort the elements by name, atomic number, weight, and so on.
C++ is the language that I used for my program; I used Borland C++ Builder, as it is a very useful tool for creating a user-interfaced program.
For balancing equations, I let the user set up their own equation by choosing elements off of the periodic table. Let's say that a user enters an equation that has five hydrogen atoms on the left side (reactant side), but only two hydrogen atoms on the right side (product side): H5 + O5 -> H2O. My program will take that equation and decide that in order for the equation to be balanced, twice as many hydrogens are needed on the left side, and five times as many are needed on the right side: 2H5 + O5 -> 5H2O, successfully producing a balanced equation.
In conclusion, I would say that I met all my goals with this program. My program does all the tasks that I set out to make it do. Balancing chemical equations can sometimes be a challenge, even to a human, but my program helps make this a fast and easy process. So now balancing chemical equations is, Elemental!
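The abstract does not state how the program computes the multipliers; one standard way is to solve the element-conservation equations exactly. The Python sketch below does this with a small Gauss-Jordan elimination over rationals, and assumes the usual simple case of a one-dimensional solution space (one free coefficient); it is an illustration, not the author's algorithm.

from fractions import Fraction
from math import lcm

# Each species is a dict of element counts; we solve the homogeneous
# system "atoms in = atoms out" exactly with Fractions, then scale to
# the smallest whole-number coefficients.
def balance(reactants, products):
    species = reactants + products
    signs = [1] * len(reactants) + [-1] * len(products)
    elements = sorted({e for s in species for e in s})
    # one conservation equation (row) per element
    rows = [[Fraction(sign * s.get(e, 0)) for s, sign in zip(species, signs)]
            for e in elements]
    n = len(species)
    pivots, r = [], 0
    for c in range(n):                      # Gauss-Jordan elimination
        pivot = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(n) if c not in pivots)  # assumes one free var
    coeffs = [Fraction(0)] * n
    coeffs[free] = Fraction(1)
    for row, c in zip(rows, pivots):        # back-substitute pivot columns
        coeffs[c] = -sum(row[j] * coeffs[j] for j in range(n) if j != c)
    scale = lcm(*(x.denominator for x in coeffs))
    return [int(x * scale) for x in coeffs]

# The abstract's example: H5 + O5 -> H2O  gives  2 H5 + 1 O5 -> 5 H2O
print(balance([{"H": 5}, {"O": 5}], [{"H": 2, "O": 1}]))  # [2, 1, 5]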
2003 - CS028
I SEE YOU! ROBOTIC SURVEILLANCE USING APPEARANCE-BASED OBSTACLE DETECTION
Laura Anne Wong
Villa Victoria Academy, Ewing, NJ, USA
People use their eyes every day to locate objects and avoid obstacles by their appearance. Robots use simple obstacle detection systems such as range finders, but a camera-based system makes it possible for a robot to employ methods similar to those used by people. This project shows how simple image analysis, such as the mean color of an area, can provide sufficient navigation feedback for avoiding obstacles and locating objects under surveillance by their appearance. This more closely emulates the way people avoid obstacles.
The project employs a battery-operated, mobile robot with two Java processors coupled with a movable CMUcam camera. The robot detects objects using basic image processing without a frame buffer. No other sensors are employed. Up to three robots were used in various experiments to locate objects within a room. The environment was controlled to allow the robots free movement. This included a flat surface, even fluorescent lighting, and objects that could be visually distinguished. No overhanging obstacles were allowed.
The CMUcam generated RGB (red, green and blue) data, but experiments showed that conversion to HSI (hue, saturation, intensity) made obstacle recognition easier. It also minimized the effects of lighting and shadows. Experimental results show that full image analysis is not necessary for obstacle
avoidance and object location. The output from the camera was analyzed in a grid to identify objects. Finer grids provided more accurate obstacle
determination. The CMUcam’s low 80 x 143 pixel resolution turned out to be suitable for this task.
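The RGB-to-HSI conversion mentioned above is standard; the Python sketch below shows the textbook formulas together with the mean-color-per-grid-cell measurement the project relies on. The thresholds and decision logic of the actual robot are not reproduced, and the sample pixel is illustrative.

import math

# Standard RGB-to-HSI conversion (formulas from common image-processing
# texts; not the project's code). Inputs are 0-255 channel values;
# returns hue in degrees, with S and I in [0, 1].
def rgb_to_hsi(r, g, b):
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                     # lower half of the hue circle
        h = 360.0 - h
    return h, s, i

# Mean color of one grid cell, as in the abstract's coarse grid analysis;
# "cell" is any iterable of (r, g, b) pixel tuples.
def mean_hsi(cell):
    pixels = list(cell)
    n = len(pixels)
    avg = tuple(sum(p[k] for p in pixels) / n for k in range(3))
    return rgb_to_hsi(*avg)

print(rgb_to_hsi(200, 40, 40))    # a reddish pixel: hue near 0 degrees

Working from one mean HSI value per cell is what lets the robot avoid full image analysis: hue is largely insensitive to the lighting and shadow changes that shift intensity.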
Awards won at the 2003 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
All expense-paid trip to attend the U.S. Space Camp in Huntsville, Alabama and a certificate - National Aeronautics and Space Administration
2003 - CS047
ASSISTANCE FOR THE VISUALLY AND HEARING IMPAIRED - A COST EFFECTIVE AND PRAGMATIC APPROACH
Alicia Lorraine Worley
Keyser High School, Keyser, West Virginia, United States of America, 26726
The purpose of my project is to provide low-cost aids for the visually and hearing impaired, targeted to assist during rehabilitation after a stroke or injury. All aids must be controlled by a computer that is portable and lightweight. A PDA proved to be costly, and most peripherals were cost prohibitive. I selected the Basic Stamp computer, which can be programmed via computer, Internet, modem, PDA, and other devices, allowing medical personnel to update programs as the patient progresses. The computer had to be given the ability to "speak" (voice synthesis), display (LCD screen), and vibrate (pager vibrator) to notify the user of a change in status. Areas covered: (1) flex sensors measure the extremes to which a patient bends a joint undergoing physical therapy; (2) force-sensing resistors placed in the patient's shoes ensure proper weight distribution is maintained during rehabilitation; (3) an electrocardiogram heart monitor kit is far more reliable and cheaper than devices sold in sports departments; (4) a DS1620 digital thermometer chip measures temperatures from -67F to +257F; (5) an ultrasonic range finder measures distances from 1.2 inches to 3.3 yards; (6) stress levels (biofeedback) can easily be measured; and (7) refreshable Braille units can be constructed using shape memory alloys and lock springs. The areas addressed can be accomplished using the Basic Stamp and inexpensive solutions, and in most cases eliminate the need for additional medical staff during rehabilitation. This equates to a huge savings in time, money, patient morale, and medical staffing.
Awards won at the 2003 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2003 - CS303
ON ENUMERATIONS AND EXTREME VALUES OF ANTI-CHAINS, CHAINS, AND INDEPENDENT SETS OF BINARY TREES
Tsung-Yin Wu, Cheng-Hsiao Tsou
Taipei Municipal Chien-Kuo Senior High School, Taipei, Taiwan
The Binary Tree is one of the most important data structures in computer science. The concept of the Binary Tree can be applied to data sorting and searching. In this project we study the statistics of Chains, Anti-Chains, and Independent Sets in a Binary Tree. A Chain can be used to sort/search a set of correlated data; an Anti-Chain, on the contrary, can be used to sort/search a set of independent data; and an Independent Set can be used to classify data which interfere with each other.
In this project we propose algorithms to enumerate the number of Anti-Chains, Chains, and Independent Sets of a Binary Tree. We also calculate the extreme values of these three statistics and discuss the extreme graphs for which these extreme values are attained. We prove a duality property of extreme graphs between the Anti-Chains and Chains of a Binary Tree.
Finally, we compare the relations among these three statistics of a Binary Tree.
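The authors' enumeration algorithms are not given in the abstract; the Python sketch below shows one standard way to count the Independent Sets of a binary tree, by dynamic programming from the leaves up, which may help make the three statistics concrete.

# For each node we track the number of independent sets in its subtree
# with the node included vs. excluded; children of an included node must
# be excluded, since an independent set contains no adjacent pair.
class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def count_independent_sets(node):
    # returns (# sets containing this node, # sets not containing it)
    if node is None:
        return 0, 1          # empty subtree: only the empty set
    li, le = count_independent_sets(node.left)
    ri, re = count_independent_sets(node.right)
    including = le * re                    # children must be excluded
    excluding = (li + le) * (ri + re)      # children free either way
    return including, excluding

# A root with two leaf children has 5 independent sets (counting the
# empty set): {}, {root}, {L}, {R}, {L, R}.
root = Node(Node(), Node())
inc, exc = count_independent_sets(root)
print(inc + exc)   # 5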
2003 - CS068
TONGKE'S OPENSOURCE DATA-STRUCTURE & ALGORITHMS LIBRARY
TongKe Xue
Hamilton High, Chandler AZ, USA
Well-written graph theory libraries exist - GTL (Graph Theory Library) and LEDA (Library of Efficient Data types and Algorithms) - but their non-MIT/BSD licenses limit their use. Libraries with relaxed licenses exist, such as BGL (Boost Graph Library), but its complexity is daunting. To speed up development, there is a need for a simple, open-source graph theory library.
This project addresses this need through the provision of a library released under the MIT X11 license, written as C++ templates for graph theory, namely: disjoint sets, Dijkstra's shortest path, Bellman-Ford shortest path, breadth first search, depth first search, connected components, strongly connected components, topological sort, Edmonds-Karp maximum flow, and Kruskal minimum spanning tree. Furthermore, these algorithms have been implemented with nearly optimal big-O running times: disjoint sets through union by rank, Dijkstra's through a binomial heap, Bellman-Ford in O(VE), {strongly} connected components and topological sort in O(V+E), maximum flow in O(VE^2), and minimum spanning tree in O(E lg E).
Lastly, the graph and adjacency list implementations emulate the ::iterator, begin(), and end() styles of STL (Standard Template Library) list/vector, ensuring simplicity; the algorithms also have built-in virtual functions allowing for extension through inheritance and polymorphism (as shown by connected components through breadth first search, strongly connected components / topological sort through depth first search, etc.). As a final bonus, the entire library is written in less than 1000 lines of C++, compact enough for serious developers to read and modify the core components of, if necessary.
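As a concrete illustration of one entry in the list above, here is the textbook disjoint-set structure with union by rank (plus path compression) in Python, with Kruskal's minimum spanning tree as a client. This mirrors the technique named in the abstract, not the library's actual C++ templates.

# Union by rank keeps trees shallow; path compression flattens them
# further on each find, giving near-constant amortized operations.
class DisjointSets:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        if self.rank[ra] < self.rank[rb]:   # attach shorter tree under taller
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True

# Kruskal's MST (also in the library's list) is a natural client:
edges = sorted([(1, 0, 1), (2, 1, 2), (3, 0, 2)])  # (weight, u, v)
ds = DisjointSets(3)
mst = [e for e in edges if ds.union(e[1], e[2])]
print(mst)   # [(1, 0, 1), (2, 1, 2)]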
Awards won at the 2003 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2003 - CS023
THE DESIGN AND DEVELOPMENT OF AN EFFECTIVE AND EFFICIENT ELECTRONIC POLLING SYSTEM
Anthony Ka-Leung Yuen
Oak Grove High School, Hattiesburg, Mississippi, United States of America
In light of recent election problems caused by faulty equipment and serious human error, this study proposes to evaluate current methods of electronic polling, in addition to developing an effective and efficient polling/voting system based on sound research methodology and computer programming techniques.
In order to test the hypotheses that the research set forth, the researcher surveyed 416 people at two data collection sites. All survey respondents were potential voters who resided within the Greater Hattiesburg Area. The survey, which was designed to determine an electronic ballot interface that would please the majority of voters, consisted of nine non-threatening questions.
After eight days and over thirty hours of surveying, all data was collected and the researcher came to several conclusions after analyzing the data. The researcher concluded that the studied population would prefer a ballot interface with audio-visual capability using a touch-screen input/output setup that included each candidate's picture to the right of each candidate's name but above each candidate's party, employing either the Times New Roman or Arial fonts at a size greater than 14. It was also concluded that a blue/black/gray color scheme should be used, with a command bar at the bottom of the interface.
After the researcher came to his conclusions, he drafted a preliminary ballot design in accordance with the preferences of the majority of those surveyed. Additionally, he developed a flow chart illustrating the proper construction of an electronic polling system.
2003 - CS043
PIXELSTAR: BINARY PIXELIZATION AND ARTIFICIAL INTELLIGENCE
Rickey Lee Zachary
Benjamin E. Mays High School, Atlanta, Georgia, USA
This project is the product of a month of work implementing Binary Pixelization. It deals with the implementation of artificial intelligence and how it can be used, in conjunction with Binary Pixelization, to help robots and machines memorize familiar objects and store them for future use. The process of Binary Pixelization is as follows: first, an image is input into the initial memory of the robot through an optical device. The image is then broken down and analyzed via its individual pixels. The computer program assigns each pixel a binary representation. Finally, the robot can store this Binary Map. The steps of this project were first to research AI and the tools and current applications in the field. Next, research was done concerning LISP and AI programming. Lastly, the algorithm for Binary Pixelization was developed in C++ and implemented with the AI tools already available. The code that was created worked correctly but was found to be very large and may have implementation problems due to the size of the file.
2003 - CS012
MULTI-LANGUAGE, CROSS-PLATFORM INTEGRATED DEVELOPMENT ENVIRONMENT
Yu Xuan Zhai
Jinling High School, Nanjing, Jiangsu Province, PRC
The primary purpose of the project is to build a highly integrated development environment (IDE) that can support both the .NET and Java 2 platforms, support multiple programming languages, and develop programs that run on multiple operating systems or wireless devices, while providing good development efficiency. Developers are thus free to choose the languages they like, and only one IDE is enough.
The C# language is used to develop the project. I developed a technology called the extensible language module. Different language modules are built based on the extensible language module technology. They are connected to the IDE by XML (eXtensible Markup Language) to support multiple programming languages and platforms.
The IDE can develop programs running on wireless devices (including PDAs and mobile phones) which support the KVM (K Virtual Machine). It supports many languages, including C#, Visual Basic.NET, JScript.NET, Java, and XML. It can also develop programs running on different operating systems such as Windows, Solaris, UNIX, Linux, and Mac OS X.
The project provides an easy-to-use, highly integrated development environment by being fully object-oriented, enabling users to write code which is platform-independent and to create applications which run on wireless devices. Based on the extensible language module technology, the IDE can support more new programming languages and platforms in the future.
2003 - CS036
REAL-TIME REMESHING WITH OPTIMALLY ADAPTING DOMAIN: A NEW SCHEME FOR VIEW-DEPENDENT CONTINUOUS LEVELS-OF-DETAIL MESH
RENDERING
Yuanchen Zhu
Shanghai Foreign Language School, Shanghai 200083, China
View-dependent continuous levels-of-detail mesh rendering is much studied in Computer Graphics. However, existing methods that support arbitrary meshes suffer either from suboptimal rendering efficiency on modern graphics hardware due to irregular output meshes, or from suboptimal fidelity due to inflexible sample distribution in semi-regular remeshing. This project presents a novel solution to these problems.
I propose to use an adaptively refined progressive mesh as the base domain of further regular refinements. Special data structures are designed to allow efficient mapping from the dynamic domain to the original model during regular refinements. A new algorithm for frame-coherent view-dependent optimization of directed-acyclic-graph based generic multiresolution models is also devised. Utilizing the inverse linearity of screen-space error with respect to distance, the algorithm schedules deferred evaluation and sorting of nodes to achieve constrained optimality in delta-output sensitive runtime. In addition, geometric deformation is also supported.
The dynamic domain provides sample distribution optimized for view-dependency and triangle budgets. The regular refinements ensure local regularity and allow the use of cache-aware vertex arrangements, reducing the hardware vertex cache miss-rate by a factor of 2 to 4. With the new algorithm, less than 5% of the domain and, subsequently, of the output mesh is evaluated or modified in typical frames.
The major contributions of this project are twofold: First, a multiresolution surface representation providing both optimized sample distribution and hardware friendliness is introduced. Secondly, an efficient algorithm for view-dependent constrained optimization of generic multiresolution models is presented. Combined, the two give rise to a powerful new view-dependent continuous
levels-of-detail scheme that surpasses existing methods in both approximation fidelity and rendering efficiency.
2004 - CS016
UTILIZING A GENETIC ALGORITHM TO SIMULATE LUNAR LANDING
Brandon Rexroad Balkenhol
Arkansas School for Math, Science, and the Arts, Hot Springs AR, United States
This project used artificial intelligence (AI) to play the 1979 Atari game Lunar Lander. AI was used to develop a set of instructions that would allow the lander to land on the first landing pad. The type of AI used to accomplish this feat was a genetic algorithm (GA).
The source code for Lunar Lander was obtained via the Internet and modified to allow a GA to utilize the game and the data collected. The GA used a high selection and high mutation rate to balance out the elitism selection process used. Four trial runs were recorded, each with differing chromosome sizes, one hundred chromosomes per generation over sixty generations, a crossover rate of 80%, and a mutation rate of 1%.
Each run led to a non-convergent population. However, each run produced a best fitness above 178, the minimum fitness value that reflected a successful landing. Since this value was reached in each run, the conclusion that the GA could produce a successful landing on the first landing pad was supported.
This project illustrated that AI, and more specifically a GA, could be successfully used to land the lunar lander on the first landing pad. A future extension of this project could include creating a fitness function and GA to land the lander on all of the landing pads available. Another future extension could be to use another form of AI to attempt to successfully land the lunar module.
2004 - CS023
THE FLUID DYNAMICS OF AN AIRBORNE CONTAMINANT USING COMPUTER SIMULATION
Amanda Clare Bryan
Coon Rapids High School, Coon Rapids, Minnesota, United States of America
The purpose of this project is to discover how a contaminant flows through the air and what shape of area is most affected through contaminant dispersion, specifically airborne chemical and biological weapons.
My hypothesis was: if I compare a CAML_D simulation run with an actual experiment on my driveway, then in both cases the contaminant will be sucked in and swirl around. Further, after experimenting on my driveway, a model of the U.S. Capitol using CAML_D with the same controls will do the same thing.
In order to prove or disprove my hypothesis I ran a computer simulation, starting out with a simple modeling program, running it through several mesh generators, and finally applying the contaminant to the equations of each node. The results upheld my hypothesis in showing that the contaminant in both cases was sucked into the U-shaped environment, where it swirled around.
In conclusion, the fluid dynamics of an airborne contaminant show that, in cases such as a chemical weapons attack, the contaminant would be sucked into a U-shaped environment rather than flowing past as human logic would dictate.
2004 - CS004
DEVELOPMENT OF GENETIC PROGRAMMING ORIENTED SOFTWARE WITH ADDED POPULATION EFFICIENCIES FOR THE AUTONOMOUS
CREATION OF COMPLEX C-BASED PROGRAMS.
Nicholas Robert Carson
Bayside High School, Palm Bay, Florida, USA
The objective of this project was to design and develop Self Evolving C (SEC), a GP-based software package capable of autonomously creating programs written in C. The SEC software was expanded from the previous year by improving the code and algorithms used, to enable faster runs and more efficient methods of creating programs. These include such functions as Duplication Terminate, Population Reduction, and Optimizer. The ability to use FOR statements, characters, and floats was added as well. Evolving these programs using GP, SEC is able to create working programs that can solve a given problem. Distributed processing methods were incorporated using the Winsock network API over an IP network to utilize the combined force of multiple computers to solve a particular problem. A function library was designed so that SEC could call upon previously solved problems and use them to assist in the creation of more complex programs by integrating function calls within generated programs.
SEC was tested on a multitude of problems of varying complexities. SEC was successfully able to solve every problem given to it by using improved GP methods and added population efficiencies.
Software designed and developed for this project could be widely used by software engineers as well as the robotics industry and NASA. Applications include:
- A programmer's utility that writes code to be integrated into software.
- A full-fledged artificial software engineer able to create software in its entirety.
- A problem-solving AI for robotic/NASA implementations to carry out tasks/missions completely independently.
Awards won at the 2004 ISEF
Special Achievement Award Certificate - Caltech JPL
All expense-paid trip to attend the U.S. Space Camp in Huntsville, Alabama and a certificate - National Aeronautics and Space Administration
Second Awards of $1,500 - U.S. Air Force
2004 - CS051
INVISIBLE CRYPTOGRAPHY - A MEDICAL APPLICATION
Anthony Paul Chiarelli
St. Thomas More C.S.S. , Hamilton, Ontario, Canada
My Science Fair project focuses on a practical application of Steganography and Cryptography. Because of the importance of security issues in the
management of medical information, I suggested the use of these techniques to protect medical images. My main objective was to develop software that would encode patient information, medical history, and a patient's image into different medical images from modalities such as ultrasound and CT that had been saved in BMP and AVI formats. I also wanted the program to encrypt the patient information stored in the header of DICOM images using a fairly sophisticated cryptographic technique. Most significantly, I wanted to port the program to a PDA device so that embedded medical images could be viewed securely by doctors in the field.
I used a computer, a Pocket PC, and Visual Basic as my primary tools. I obtained a selection of medical images of various modalities and digital formats. I developed algorithms for encoding and decoding four formats (BMP, AVI, WAV, DICOM) and coded these techniques in VB 6. After testing the program with various data, I proceeded to port the program to the Pocket PC. Once completed, the same test files were used on the Pocket PC to see if they would decode correctly.
All the methods successfully hid the data so that it was unnoticeable, and decoded the data requested. Several medical professionals indicated confidence in what they saw even with data embedded into the image. They felt the changes did not change the perception of the image or the diagnosis.
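The abstract does not specify the embedding scheme; least-significant-bit (LSB) embedding is the classic choice for uncompressed formats like BMP, and the Python sketch below shows the idea. The pixel data and message here are illustrative stand-ins, not the author's medical data.

# LSB embedding: hide one message bit in the lowest bit of each pixel
# byte. Changing only the lowest bit alters each channel value by at
# most 1, which is why the images remain visually unchanged.
def embed(pixels, message: bytes):
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit        # overwrite the lowest bit
    return out

def extract(pixels, n_bytes: int):
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(sum(bits[k * 8 + i] << i for i in range(8))
                 for k in range(n_bytes))

cover = list(range(256)) * 4                  # stand-in for pixel data
stego = embed(cover, b"patient #0042")
assert extract(stego, 13) == b"patient #0042"

In practice the hidden payload would itself be encrypted first, matching the abstract's combination of cryptography with steganography.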
Awards won at the 2004 ISEF
IEEE Regional Award of Merit, from individual donations, on behalf of IEEE Regional Activities of $50 each per regional winner. IEEE Regional Activities
regional winners are young technical and gifted students who have demonstrated an aptitude in an IEEE technical area of interest and represent the
transnational nature of IEEE. - IEEE - Oregon and Seattle Sections
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
DuPont will additionally recognize Honorable Mention winners with an award of $500. - National Aeronautics and Space Administration
2004 - CS054
DIGITAL VIDEO COMPRESSION ENHANCEMENT WITH REDUCED PSYCHOVISUAL REDUNDANCY
Wooi Meng Chin
Chung Ling High School, Georgetown, Penang, Malaysia
Video compression is indispensable to web streaming and memory storage. Most compression technology has difficulty achieving high-quality video at lower bit rates, and limited transmission bandwidth and system resources often degrade video signals. Thus the goal of my research was to enhance video compression performance and to improve visual quality. It is hypothesized that reducing neighboring-pixel coding redundancy and human perceptual (psychovisual) redundancy could produce a low-complexity geometry stream for animated visual objects. A set of algorithms is developed to parse bidirectionally interpolated pixels into their characteristic cells, which vary in spectral energy and wavelength. Bits contained in these cells are vectorized and transformed recursively to identify lower correlations among vector arrays for block filtering. A DCT function calculates energy ratios between high and low spatial frequencies, devoting most of the non-absolute-value coefficients to the calculated energy ratios. Variable quantization is used to measure the sensitivity and intensity of colors, to discard visually redundant data, and to restore missing high-spatial-frequency pixels. This approach leads to the ability to compress video data that would normally require a large amount of memory to store and high bandwidth to transmit. Results from the enhanced video compression experiment attained 0.1 bpp (192 kbps, 25 fps) without noticeable artifacts, comparable to video compression techniques in use today that achieve 0.5 bpp (1.5 Mbps, 25 fps). Methods to further reduce video bit rates are also discussed.
Awards won at the 2004 ISEF
IEEE Regional Award of Merit, from individual donations, on behalf of IEEE Regional Activities of $50 each per regional winner. IEEE Regional Activities
regional winners are young technical and gifted students who have demonstrated an aptitude in an IEEE technical area of interest and represent the
transnational nature of IEEE. - IEEE - Oregon and Seattle Sections
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
Scholarship award of $1,500 - National Collegiate Inventors and Innovators Alliance/The Lemelson Foundation
2004 - CS030
SCINTILLATION JAVA BASED 3D MODELING SOFTWARE
Jason Alan Cope
Battle Creek Area Mathematics and Science Center
Scintillation is a 3D developer that runs on any Java3D-enabled system. This makes it more compatible than other 3D developers that can only run on specific operating systems. Scintillation uses a graphical interface to allow easy creation of 3D shapes. There are three 2D graphs. The user clicks and drags the mouse over these to create a basic shape such as a cube, pyramid, cone, cylinder, or sphere. The three 2D graphs represent the top, front, and right-side views of the 3D shape. There is also a 3D view of the shape. The user can create points, lines, planes, and circles by clicking on the 2D graphs. Simple shapes can be combined to create complicated 3D forms. The user can change the color of the shape and add textures to it. The user can also add lights to a shape in order to illuminate it. After the shape is created, it can be saved to a Java file. The Java file contains the shape as it appears in the 3D view. The Java file can be compiled and run like any other Java source file. It can also be transferred between programmers. The programmers don't need Scintillation to edit the Java file. This is different from other 3D developers, where the authoring software is needed to modify the shape.
Scintillation can serve animators, game programmers, and industrial designers. Each can use Scintillation to create 3D shapes that can be viewed and edited on any Java3D-enabled operating system.
2004 - CS041
MODULAR PEER-TO-PEER GRID COMPUTING
Joseph Anthony Crivello
University School of Milwaukee, Wisconsin, USA
This project seeks to provide a feasible method of recycling unused computer processor time. Through research and development, a new and innovative method of addressing this was designed: Modular Peer-to-peer Grid Computing, or MPGC. This new method breaks the mold by using peer-to-peer networking (similar to that of file sharing programs) instead of the traditional server/client model. Additionally, an original way of allowing the grid to quickly adapt itself to solve new problems, Peer-based Plugin Propagation, or PPP, was developed. By utilizing these two novel grid computing models, a test program was successfully coded and used to create a test network at University School of Milwaukee.
Several iterations of this new model for grid computing were created and tested by programming real-world implementations in C++ and VB. Throughout the course of the project, it was concluded that the MPGC networking model combined with the PPP plug-in management system provided the most benefits to organizations running PCs.
These ground-breaking methods of structuring grid networks allow for increased efficiency and simpler deployment in an organization or home. In addition, they do not require a potentially expensive, centralized server, and they allow for extreme network flexibility. This technology could also be adapted for use in large arrays of small, cheap, wirelessly networked sensors. These sensors could be used for anything from monitoring ocean currents to tracking troop movements on a battlefield. The applications for this new grid computing model are far reaching, and could revolutionize many industries and organizations.
Awards won at the 2004 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
2004 - CS032
VIVA EVOLUTION: A STUDY OF EVOLUTIONARY PROGRAMMING USING CONWAY'S RULES OF LIFE AS A NEURAL NET
Ian Douglas Cummings
St. Joseph's Catholic High School, Ogden, Utah, USA
The purpose of this experiment was three-fold: A) to test the feasibility of using a cellular automaton as a simulator for a brain, B) to compare the effectiveness of this brain against random movement in similar virtual organisms, and C) to examine the viability of evolving these organisms to become a superior species.
The experiment was conducted in a virtual environment which existed on a computer. Individual electronic "organisms" were placed in the environment and tested for fitness on the basis of their ability to move and find food. The control group moved according to a random number generator. One experimental group used a cellular automaton as a brain simulator, but did not evolve. A second experimental group used a cellular automaton as a brain and a genetic algorithm to simulate evolution.
After multiple tests it was found that the control group averaged a fitness score of slightly better than one. (The fitness score was derived by counting total food eaten and distance moved.) The first experimental group averaged a fitness score of slightly less than three. The second experimental group averaged a fitness score of four.
The project demonstrates that a cellular automaton can be used to simulate brain activity in an electronic organism. Organisms using this technique are demonstrably more fit than organisms controlled by a random number generator. Further, the effectiveness of this brain (in terms of survival fitness in a virtual environment) can be improved through the use of evolutionary programming techniques.
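For reference, the cellular automaton named in the title updates as sketched below in Python, showing one generation of Conway's Game of Life. How the project maps grid states to organism movements is its own design and is not reproduced here.

from collections import Counter

# One update step of Conway's Game of Life: a dead cell with exactly 3
# live neighbors is born; a live cell with 2 or 3 live neighbors survives.
def life_step(live_cells):
    """live_cells is a set of (x, y) coordinates; returns the next generation."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "blinker" oscillates between horizontal and vertical:
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))   # {(1, 0), (1, 1), (1, 2)}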
2004 - CS044
A NEW ALGORITHM FOR RAPID COMPUTER RECOGNITION OF CIRCULAR AND NEAR-CIRCULAR OBJECTS
Jacob Reynolds Darby
Home Educator's Science Partnership, Plano, Texas, USA
I was surprised to learn that computer vision has not progressed much in the past decade; computers cannot recognize many simple objects in real, everyday images. Shape (or "blob") recognition is a necessary step toward effective computer vision, and I wanted to develop an algorithm that would rapidly recognize circular and near-circular objects such as blood cells, traffic signal lights, and human eyes. A crucial step in shape recognition is edge detection, so I wanted to determine which edge detection method would produce the best input images for my circle-recognition algorithm. After completing my research, I hypothesized that the Canny edge detection method would produce the best input images for my circle-recognition algorithm.
I used MATLAB to create edge images using six edge detection algorithms. I then wrote a C++ program, using functions from Intel's OpenCV library, that attempted to identify and mark circles on the edge images created using MATLAB. After my circle-recognition algorithm processed the various edge images, I counted the true positives, false positives, and false negatives for each image. After completing my experiments, I analyzed and graphed the results. I concluded that the Canny edge detection method produced the best input images for my circle-recognition algorithm, surpassing the Sobel, Roberts, Prewitt, Laplacian of Gaussian, and Zero Cross methods. My new circle-recognition algorithm may be used in various real-world applications, in fields such as medicine, safety, and human-computer interfaces, to more accurately identify circular and near-circular objects.
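The author's circle-recognition algorithm is custom, but the same edge-detection-then-circle-finding pipeline can be sketched with OpenCV's built-in operators in Python, as below. The file name "cells.png" and all threshold values are hypothetical.

import cv2
import numpy as np

# Canny edge detection, as in the winning method of the comparison above.
img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=50, threshold2=150)

# OpenCV's Hough circle transform stands in here for the author's custom
# recognizer. HOUGH_GRADIENT runs Canny internally (param1 is its high
# threshold), so it takes the blurred grayscale image, not `edges`.
blurred = cv2.GaussianBlur(img, (9, 9), 2)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=150, param2=30, minRadius=5, maxRadius=100)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}) with radius {r}")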
Awards won at the 2004 ISEF
First Award of $1,000 - Eastman Kodak Company
2004 - CS026
PROGRAMMING A TEXT AND OBJECT-BASED SOCCER SIMULATION UTILIZING PHP, MYSQL DATABASING, SQL, JAVASCRIPT, CSS, AND HTML
David Joseph Denton
Central High School, Saint Joseph, Missouri, United States
Research was conducted to determine if an open-source program could be successfully developed to evaluate user-supplied variables and accurately predict the outcome of a soccer game. The program was written using PHP: Hypertext Preprocessor, MySQL databasing, JavaScript, Cascading Style Sheets (CSS), and Hypertext Markup Language (HTML). Individuals could dynamically create usernames, teams, players, goalkeepers, and playing conditions. The gathered data was stored in a MySQL database on a web server. This data was then accessed dynamically by users to run the soccer simulation. If-then statements developed a series of variable integers within the program. First, a Relative Player Analysis (RPA) was developed for each player. These RPAs were used to
develop a Relative Formation Analysis (RFA) for each of the four soccer positions for each team. The RFAs were used in an original mathematical equation that developed the goal-to-goal ratio for each team. Upon completion of the software design, the simulation encompassed over 10,000 lines of original source code and 38 original files. A null hypothesis was developed stating that the soccer simulation program would show predictive power. Five Major League Soccer (MLS) teams, 55 players, 11 goalkeepers, and 10 playing conditions were entered into the database using the original program, with variables drawn from official statistics gathered from the MLS website. The simulation was run for 10 previously played MLS games, and statistical analysis was conducted on the results. The results showed the soccer simulation to indeed have predictive power, therefore failing to reject the null hypothesis.
2004 - CS005
CAN A SOFTWARE PROGRAM BE DEVELOPED THAT CAN ACCURATELY CORRELATE MOUSE POINTER MOVEMENT TO THE HUMAN EYE?
Darash Gayesh Desai
Celebration High School, Celebration FL, USA
Today's increase in computer-related work has led to a wide variety of software designed to ease complications and increase productivity. Programs such as Microsoft Word and Microsoft PowerPoint are just two examples of commercial software that exemplify this. With every innovative software program, a new level of ease is reached in terms of work and productivity. Yet wouldn't work be easier if interaction with the computer were more natural? Speech recognition software is already available, but what is the next step?
I chose this topic with this question in mind, and what I came up with was a new, more natural way to interact with one of the most basic components of a computer system: the mouse pointer. With this experiment, my plan was to analyze the accuracy of using eye movements to control the mouse pointer. This experiment was carried out by designing and writing computer software for an eye-tracking system that would enable one to move the mouse pointer with the ease and comfort of using one's eyes.
My hypothesis was that the system could be developed such that it yielded very accurate results. This hypothesis was not supported by the executed software, though the goal is not unattainable. The final results came to 96.25% accuracy for the mouse-moving component of the system, and 55.56% for the image-processing component.
The purpose of the project was to develop a system that would both increase efficiency and provide an opportunity for those who are disabled to use the computer. In proceeding with the experiment, this goal was established as achievable, but it has not yet been reached. This eye-tracking system must be taken one step further by optimizing the image-processing component to raise its accuracy, and therefore the accuracy of the system as a whole, thereby reaching the ultimate goal of increased productivity and accessibility.
Awards won at the 2004 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2004 - CS029
NICE GUYS ON THE MOVE: A COMPUTER SIMULATION STUDYING THE EFFECTS OF MIGRATION ON GENETIC ALGORITHMS AND BEHAVIORAL
EVOLUTION
Michael Jason Diedrich
Century High School, Rochester MN, USA
The Prisoner's Dilemma is an element of game theory which evaluates the success of different types of behavior; it is particularly useful in the iterated case, where competitors meet more than once and have a memory. It is also an example of a "hard" problem, one where the difficulty of solving increases exponentially as more variables or data points are added. A computer programming technique known as "genetic algorithms" simulates evolution in an attempt to evolve good solutions to hard problems such as the Prisoner's Dilemma. This experiment evaluated the effect of migration on a computer simulation using genetic algorithms to propose solutions to the three-player, iterated Prisoner's Dilemma. It was hypothesized that migration would, over time, produce results that were as good as or better than those of independently evolving populations. The simulation was written in the Java programming language and simulated two independent populations evolving for 950 generations; then, the top half of each population migrated to form a new population which evolved for 50 additional generations. The two original populations were also allowed to continue to evolve for 50 generations to provide a means of evaluating the effects of migration. Three different environments for dominance were tested: no dominance, cooperate-dominant, and defect-dominant. One hundred trials were run of each environment, and the hypothesis was supported by statistical analysis of the results.
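For readers new to the game, the Python sketch below scores an iterated Prisoner's Dilemma match using the standard two-player payoffs; the project's three-player variant and its genetic encoding are not reproduced here, and the payoff values are the textbook ones, not necessarily those used in the simulation.

# Standard two-player payoffs: mutual cooperation beats mutual defection,
# but defecting against a cooperator pays best for the defector.
PAYOFF = {                     # (my move, their move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Each strategy sees the opponent's full history (the 'memory' that
    makes the iterated game interesting) and returns 'C' or 'D'."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]
always_defect = lambda opp: "D"
print(play(tit_for_tat, always_defect))   # (9, 14): defection wins one match

Although defection wins any single match, in round-robin populations cooperative strategies often accumulate more total points, which is what makes the evolutionary setting interesting.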
2004 - CS050
BETTER FIREWALL WITH MESSAGING SERVICE
Usman Fazal
Islamabad Model College for Boys F-8/4 , Islamabad, Pakistan
Extreme security is needed to protect data from hackers. This project therefore develops computer software called "Better Firewall with Messaging Service". It consists of a local port scanner, a remote port scanner, and a messenger for a Local Area Network. This firewall is a client/server firewall, but it can still be used in a home environment as well as a client/server environment. It enables the user to scan all the communication ports, and all communications are displayed on the screen. A messaging service is also provided so that users can send messages to a specific computer, to a domain, or to a workgroup. An attacker's log is also maintained so that users can see any attack that occurs. Due to lack of time, this software is still in progress and is not yet able to block virus-type activity; it will soon be able to quarantine virus-type activity in both environments.
2004 - CS045
THE USE OF COMPUTER SIMULATION TO INVESTIGATE HALLWAY CONGESTION
Jeffrey Robert Ferman
Berkley High School, Berkley MI, USA
Hallway congestion - traffic jams in high school corridors - is not only an inconvenience, but results in thousands of wasted man-hours every year at high schools all around the country. Computer simulation provides an ideal tool to study the causes of congestion and to evaluate a number of potential remedies, quickly and inexpensively, without moving a door or running experiments with 1500 teenage participants.
Witness, a computer simulation language and simulator, was used to construct a model of the high school, and multiple simulations were run demonstrating the validity of the concept. The model includes accurate dimensions of all the hallways and classrooms on both floors. Each simulated student makes intelligent decisions to optimize his route to his next class using programmed routing logic. The model can generate randomized student schedules or use actual schedules. To use actual schedules, a C++ program was written to translate the classroom names to Witness code.
The model was used to evaluate congestion by calculating both the average time it takes for students to go to class and the amount of time it takes for all students to get to class. This allows evaluation of the effects of various parameters and behaviors, such as people stopping in the hall, walking speed, and the number of students. Predictions from the model agree well with actual times obtained by student volunteers. This model can be used to evaluate and optimize traffic in malls, offices, and other buildings, as well as to analyze fire, tornado, and evacuation procedures.
Awards won at the 2004 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2004 - CS033
A NEW APPROACH TO THE IDENTIFICATION OF COMPUTER VIRUSES USING ARTIFICIAL NEURAL NETWORKS
Benjamin Alan Frison
Carlisle High School, Carlisle, Pennsylvania, United States
Malicious mobile code (MMC) causes billions of dollars in destruction to information systems each year. Currently, the most widely used method of recognizing
MMC is the scanner model, in which a large database of specific virus signatures is pattern-matched to each scanned file in order to find imbedded MMC. This
method is becoming obsolete due to its inability to detect MMC variants and the exponential growth in number and complexity of MMC. The purpose of this
project is to develop a system of artificial neural networks (ANNs) to differentiate computer viruses, a form of MMC, from benign executables. <br><br> An
unsupervised learning ANN was integrated with a backpropagating ANN in order to categorize and subsequently recognize executables. Viruses and benign
executable assembly operation instructions (without the operands) were mapped to decimal values depending upon the specific instruction and inserted into
the first ANN. The output of the first ANN was a category neurode in the Kohonen output layer, which then fired into the second ANN, which recognized the
category as either a virus or a benign executable. A modified form of backpropagation was used for ANN learning. Although the ANN tended to have problems
learning, it was efficient in classifying and generalizing viruses. The results showed that ANNs are a viable approach to the identification of
MMC.
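The preprocessing the abstract describes, mapping operand-stripped assembly mnemonics to decimal values and letting a winner-take-all output neurode fire for a category, can be sketched minimally in Java; the mnemonic table, weights, and winner-take-all layer below are illustrative stand-ins, not the project's actual networks.
    import java.util.*;

    public class OpcodeMapper {
        // Hypothetical mnemonic table; a real system would cover the full instruction set.
        static final List<String> MNEMONICS =
            Arrays.asList("mov", "push", "pop", "call", "jmp", "xor", "add");

        // Map each operand-stripped instruction to a decimal value in [0, 1].
        static double[] encode(String[] instructions) {
            double[] v = new double[instructions.length];
            for (int i = 0; i < instructions.length; i++) {
                int idx = MNEMONICS.indexOf(instructions[i].toLowerCase());
                v[i] = (idx < 0 ? MNEMONICS.size() : idx) / (double) MNEMONICS.size();
            }
            return v;
        }

        // Winner-take-all: the category whose weight vector lies nearest the
        // input "fires", loosely mimicking a Kohonen output neurode.
        static int categorize(double[] input, double[][] weights) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int c = 0; c < weights.length; c++) {
                double d = 0;
                for (int i = 0; i < input.length; i++)
                    d += (input[i] - weights[c][i]) * (input[i] - weights[c][i]);
                if (d < bestDist) { bestDist = d; best = c; }
            }
            return best;
        }

        public static void main(String[] args) {
            double[] sample = encode(new String[] { "mov", "xor", "call", "jmp" });
            double[][] categories = { { 0.0, 0.7, 0.4, 0.6 }, { 0.9, 0.9, 0.1, 0.2 } };
            System.out.println("fired category = " + categorize(sample, categories));
        }
    }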
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
IEEE Regional Award of Merit, from individual donations, on behalf of IEEE Regional Activities of $50 each per regional winner. IEEE Regional Activities
regional winners are young technical and gifted students who have demonstrated an aptitude in an IEEE technical area of interest and represent the
transnational nature of IEEE. - IEEE - Oregon and Seattle Sections
2004 - CS036
SIS.DOCTOR II
Henrique Führ
Fundação Liberato, Novo Hamburgo, Rio Grande do Sul, Brasil
The main objective of this project is to develop a web application that stores all of a patient’s medical information in a database accessible to a doctor from any
internet-connected computer.<br><br>A previous research project revealed the physician's need to have the patient’s medical history before the clinical
examination, and an application called SIS.DOCTOR was developed to support this.<br><br>At the suggestion of doctors, the SIS.DOCTOR
project inspired a new, more efficient, and more complete system called SIS.DOCTOR II.<br><br>The SIS.DOCTOR II project involved interviews with
doctors who had analyzed SIS.DOCTOR’s interface. A literature review of web page development and of the Java and JSP programming
technologies was conducted to find ways to improve the information security and the web page design.<br><br>A database based on MySQL was created, and a
presentation layer was developed using Java and JSP technology. The system is hosted on a Tomcat web server. <br><br> The final result is a safe,
intuitive, and robust system with many tools, such as the ability to print medical tests, a contact list, and the use of the DICOM standard to exchange
information with other medical equipment. The patient gains a more agile and efficient service, and the doctor gains a powerful
instrument to improve diagnosis.<br><br>
2004 - CS057
BRAIN-COMPUTER INTERFACE FOR THE MUSCULARLY DISABLED
Elena Leah Glassman
Central Bucks High School West, Doylestown, PA, USA
Muscular disability may deny computer access. Brain-computer interfaces (BCIs) using brain wave-collecting, non-invasive electroencephalographs (EEGs)
may allow computer access by substituting for a keyboard and/or mouse. To build a BCI, I began last year developing the software for interpreting EEG signals
as commands for a computer; it is a framework of signal analysis (wavelets), feature selection, and pattern recognition (Support Vector Machines). This year I
designed EEG-collecting experiments to bypass public-domain data limitations. <br><br> The EEG data from my own experiments shows striking inter-subject
variation and proved more difficult than the public-domain data. It may be due to the equipment (at a university cognitive electrophysiology lab), but it is the
equipment I will probably use in the final BCI. Therefore, to attain desirable accuracy on this data, it is necessary to continue optimizing my software. Extensive
testing shows the Coif3 wavelet and the EEG-specific wavelet I created last year may be optimal wavelets for EEG signal analysis. To optimize the feature
selection, I developed a method for picking the most discriminable, non-redundant features by combining discriminability and correlation into a measurement of
fitness; data shows my method outperforming the two textbook methods it was compared to. Collecting data from my own experiments initiated the
development of software for real streaming data, where the software must actually find brain wave commands in the incoming data, in addition to recognizing
which commands they are. Work is continuing with streaming data, exploring regression algorithms for time-alignment and new mental tasks for use as
commands.<br><br>
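The feature-selection idea, ranking features by discriminability while penalizing redundancy, can be illustrated with a greedy sketch in Java; the fitness formula here (discriminability minus the absolute correlation with each feature already chosen) is an assumed stand-in for the author's own measure.
    import java.util.*;

    public class FeatureSelector {
        static double mean(double[] v) {
            double s = 0; for (double x : v) s += x; return s / v.length;
        }
        static double var(double[] v, double m) {
            double s = 0; for (double x : v) s += (x - m) * (x - m);
            return s / v.length;
        }
        // Separation between the two classes for one feature.
        static double discriminability(double[] a, double[] b) {
            double ma = mean(a), mb = mean(b);
            return Math.abs(ma - mb) / Math.sqrt((var(a, ma) + var(b, mb)) / 2 + 1e-12);
        }
        static double correlation(double[] x, double[] y) {
            double mx = mean(x), my = mean(y), num = 0, dx = 0, dy = 0;
            for (int i = 0; i < x.length; i++) {
                num += (x[i] - mx) * (y[i] - my);
                dx += (x[i] - mx) * (x[i] - mx);
                dy += (y[i] - my) * (y[i] - my);
            }
            return num / Math.sqrt(dx * dy + 1e-12);
        }

        // featA/featB: per-feature sample vectors for class A and class B.
        static List<Integer> select(double[][] featA, double[][] featB, int k) {
            List<Integer> chosen = new ArrayList<>();
            boolean[] used = new boolean[featA.length];
            while (chosen.size() < k) {
                int best = -1; double bestFit = -1e9;
                for (int f = 0; f < featA.length; f++) {
                    if (used[f]) continue;
                    double fit = discriminability(featA[f], featB[f]);
                    for (int c : chosen)          // redundancy penalty
                        fit -= Math.abs(correlation(featA[f], featA[c]));
                    if (fit > bestFit) { bestFit = fit; best = f; }
                }
                used[best] = true; chosen.add(best);
            }
            return chosen;
        }

        public static void main(String[] args) {
            double[][] a = { {1, 2, 1, 2}, {5, 6, 5, 6}, {1.1, 2.1, 1.1, 2.1} };
            double[][] b = { {1, 2, 2, 1}, {9, 10, 9, 10}, {1.2, 2.0, 1.0, 2.2} };
            System.out.println("selected features: " + select(a, b, 2));
        }
    }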
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
Second Award of $500 - Association for Computing Machinery
The IEEE Foundation President's Scholarship of $10,000 - IEEE Foundation
First Award of $3,000 - Computer Science - Presented by Intel Foundation
Scholarship award of $1,000 - National Anti-Vivisection Society
Scholarship in the amount of $8,000 - Office of Naval Research on behalf of the U.S. Navy and Marine Corps.
2004 - CS062
CHAOTIC ENCRYPTION USING ARNOLD'S CAT
Charles Harold Greenberg
Suncoast Community High School, Riviera Beach, Florida, United States of America
The objective was to adapt the Arnold's Cat matrix transformation to a conventional programming language, and utilize the chaotic nature of the transformation
in a highly secure cryptosystem. The software was then evaluated on its efficiency across multiple hardware setups, comparing the bit level of the encryption
with the amount of time necessary for the entire procedure. When completed, the program was found to be highly efficient - the amount of time necessary to
perform the encryption was insignificant when compared to the degree of encryption achieved.<br><br>The program operates chiefly by utilizing the chaotic
nature of the Arnold's Cat transformation - specifically, its critical dependence upon initial conditions. By introducing a minor element of uncertainty early in the
iterative process, and then allowing Arnold's Cat to hide that uncertainty, the plaintext is concealed within the ciphertext by otherwise normal Arnold's Cat transformations. A
number of methods were examined to achieve this aim, including using a session-generated key image to swap the colors of adjacent pixels in the plaintext,
based on the colors of the corresponding pixels in the key.<br><br>The strengths and weaknesses of each method were evaluated and documented. Some,
such as the key image method mentioned above, sacrificed a degree of usability in return for a high level of encryption. Other methods were exactly the
opposite. It was found that each method's efficiency increased with the processing power of the computer. As cryptosystems, the methods' use of Arnold's Cat
made them extremely effective and broadly useful in any number of civilian, business, or government applications.
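For reference, the Arnold's Cat transformation at the heart of this cryptosystem is compact enough to sketch in full: each pixel (x, y) of an N x N image moves to ((x + y) mod N, (x + 2y) mod N), scrambling the image chaotically yet invertibly. The 3x3 integer "image" below is illustrative, and the keying scheme described above is omitted.
    public class ArnoldCat {
        // One application of the cat map to an N x N image.
        static int[][] iterate(int[][] img) {
            int n = img.length;
            int[][] out = new int[n][n];
            for (int x = 0; x < n; x++)
                for (int y = 0; y < n; y++)
                    out[(x + y) % n][(x + 2 * y) % n] = img[x][y];
            return out;
        }

        public static void main(String[] args) {
            int[][] img = { {1, 2, 3}, {4, 5, 6}, {7, 8, 9} };
            int[][] s = img;
            for (int i = 0; i < 2; i++) s = iterate(s);   // two chaotic iterations
            for (int[] row : s) System.out.println(java.util.Arrays.toString(row));
        }
    }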
Awards won at the 2004 ISEF
Scholarship award of $20,000 - Department of Homeland Security
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2004 - CS049
WHAT EFFECTS DO HOME APPLIANCES HAVE ON WIRELESS COMPUTER NETWORKS?
Anna Maria Hall
Andrean/Lew Wallace, Gary, Indiana, USA
I tested how microwave technology affected ping speed and data loss in wireless networks built around an 802.11b wireless router: an 802.11b XP laptop, an
ME laptop with a PCMCIA card, a 98se desktop with an 802.11b USB adapter, and a 98se desktop with an 802.11g USB adapter. The Microsoft ping utility timed
packets' roundtrips from each computer to all network devices. Interference sources and microwave technology were introduced to wireless networks at school and in 3
households. Network devices were pinged while 1, 2, and 3 microwaves ran simultaneously. I also tested other household devices with the potential to interfere
with radio wave networks, such as 2.4 GHz cordless devices and other electronics. My prediction was that ping time and data loss would increase or the network would
fail. I tracked network signal and noise (SNR) strength using Network Stumbler network analyzer software. The network became unstable, showing peaks during
microwave runs. A microwave leakage detector measured leakage of 0.35 to 5.45 mW/cm2, but only within 2 feet of the device. Microwaves and 2.4 GHz
devices running simultaneously increased ping time drastically in all networks. These home environmental disturbances cause hardware errors, network
timeouts and delays, and breakdowns of the wireless network. An 802.11b wireless LAN may encounter interference at home; you probably won't even notice
unless your microwave and phone are in use all the time. In corporate and military settings, however, such breakdowns can cost money and convenience,
making the placement and choice of devices a major concern for data reliability and security.
Awards won at the 2004 ISEF
Tuition scholarship of $5,000 per year for 4 years for a total value of $20,000 - Indiana University
All expense-paid trip to attend the U.S. Space Camp in Huntsville, Alabama and a certificate - National Aeronautics and Space Administration
Award of $3,000 in savings bonds, a certificate of achievement and a gold medallion - U.S. Army
2004 - CS042
AN EXTENSION OF THE DINING CRYPTOGRAPHERS PROTOCOL TO MULTIPARTY, SIMULTANEOUS SENDER-RECIPIENT ANONYMOUS
COMMUNICATION
Bruce David Halperin
Half Hollow Hills High School East, Dix Hills, NY 11746
This work develops a new and secure method of allowing many parties to communicate anonymously over the Internet. The protocol developed in this project
breaks a group of hypothetical users into smaller groups using a "distributed moderator." Next, David Chaum's Dining Cryptographers Protocol is performed
within these small groups. The moderator and users then exchange information in a series of steps, allowing each user to determine what messages were sent
by the users but not which user sent each message. This paper includes a detailed explanation of the protocol as well as diagrams, examples, proofs, and
programs written in Mathematica® to simulate the protocol and evaluate its efficacy. It also considers the protocol's many diverse applications in everything
from electronic voting systems to wireless technologies both in the present and future.
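One round of the underlying Dining Cryptographers Protocol can be sketched in a few lines of Java (three parties and an arbitrary sender; the distributed-moderator extension is not shown). Because every shared coin enters exactly two announcements, the XOR of all announcements yields the message bit while hiding who sent it.
    import java.util.Random;

    public class DiningCryptographers {
        public static void main(String[] args) {
            Random rnd = new Random();
            boolean[] coin = new boolean[3];       // coin[i] is shared by parties i and i+1
            for (int i = 0; i < 3; i++) coin[i] = rnd.nextBoolean();

            int sender = 1;                        // the anonymous sender (arbitrary)
            boolean messageBit = true;             // the bit being broadcast

            boolean result = false;
            for (int p = 0; p < 3; p++) {
                // Each party announces the XOR of its two shared coins...
                boolean announce = coin[p] ^ coin[(p + 2) % 3];
                // ...and the sender additionally XORs in the message bit.
                if (p == sender) announce ^= messageBit;
                result ^= announce;
            }
            System.out.println("recovered bit = " + result);  // always equals messageBit
        }
    }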
Awards won at the 2004 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2004 - CS035
GENETIC ALGORITHMS: THE EFFECTS OF THE MUTATION RATE, POPULATION COUNT, AND ELITISM RATE ON THE EFFICIENCY OF GENETIC
ALGORITHMS
Benjamin Scott Hamner
Academic Magnet High School, North Charleston SC, United States of America
Genetic algorithms are part of the field of evolutionary computation, which was inspired by the process of Darwinian evolution. They work by generating an
initial random population of individuals (possible solutions to a problem), evaluating how close these individuals are to solving the problem, and then performing
the crossover and mutation operations on these individuals over multiple generations, until one individual represents the solution to the problem. Three
variables in a genetic algorithm are the mutation rate, the elitism rate, and the population count. Even though genetic algorithms can find good solutions to a
problem much more quickly than conventional methods, they can still take weeks to run. The purpose of this project was to find efficient elitism rates, population
counts, and mutation rates in order to decrease the time that it takes for a genetic algorithm to run. This was accomplished by recording the amount of time it
took a genetic algorithm to find the solution to a relatively simple problem, the lowest point on a graph, for five different mutation rates (25, 50, 75, 100, and
125), five different population counts (20, 40, 60, 80, and 100) and five different elitism rates (5%, 10%, 15%, 20%, and 25%). This was repeated one thousand
times for each set of mutation rates, population counts, and elitism rates. The best-performing genetic algorithm had a mutation rate of 50 (which means there
was a 2% chance that a bit was going to be mutated), a population count of 40, and an elitism rate of 25%. The population count and mutation rates were the
hypothesized best, but the hypothesized best elitism rate was 10%, while the actual best elitism rate was 25%. This may have occurred because a higher
elitism rate means that there are more genes of the most fit individuals in the gene pool. This data, which shows that genetic algorithms perform the best with
midrange mutation rates and population counts but high elitism rates, can be used to improve the efficiency of genetic algorithms.
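As an illustration of the parameters studied, the following minimal Java genetic algorithm uses the best-performing settings reported above; the fitness curve f(x) = (x - 200)^2, the one-point crossover, and the other details are assumptions, but note how a mutation rate of 50 becomes a 2% per-bit flip probability.
    import java.util.*;

    public class SimpleGA {
        static final int BITS = 10, POP = 40, RATE = 50;  // mutation rate 50 -> 2% per bit
        static final double ELITISM = 0.25;               // elitism rate 25%
        static final Random RND = new Random();

        // Stand-in "graph": the GA seeks the lowest point of f.
        static double f(int x) { return (x - 200) * (x - 200); }

        static int decode(boolean[] g) {
            int v = 0;
            for (boolean b : g) v = (v << 1) | (b ? 1 : 0);
            return v;
        }

        public static void main(String[] args) {
            boolean[][] pop = new boolean[POP][BITS];
            for (boolean[] g : pop)
                for (int i = 0; i < BITS; i++) g[i] = RND.nextBoolean();

            for (int gen = 0; gen < 200; gen++) {
                Arrays.sort(pop, Comparator.comparingDouble(g -> f(decode(g))));
                int elites = (int) (POP * ELITISM);
                boolean[][] next = new boolean[POP][];
                for (int i = 0; i < elites; i++) next[i] = pop[i].clone();  // elitism
                for (int i = elites; i < POP; i++) {
                    boolean[] a = pop[RND.nextInt(elites)], b = pop[RND.nextInt(elites)];
                    boolean[] child = new boolean[BITS];
                    int cut = RND.nextInt(BITS);              // one-point crossover
                    for (int j = 0; j < BITS; j++) {
                        child[j] = (j < cut) ? a[j] : b[j];
                        if (RND.nextInt(RATE) == 0) child[j] = !child[j];  // mutation
                    }
                    next[i] = child;
                }
                pop = next;
            }
            Arrays.sort(pop, Comparator.comparingDouble(g -> f(decode(g))));
            System.out.println("best x = " + decode(pop[0]) + ", f = " + f(decode(pop[0])));
        }
    }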
2004 - CS064
E-CANISTER A RELIABLE ANTI-SPAMMING SERVER BASED ON VOTING AND LEARNING
Sung-Jin Hong
Daejon Science Highschool, Daejon, Republic of Korea
Losses from spam mail are intensifying, and many companies are looking for an effective solution. Although some mail clients can eliminate spam
to some degree, there are limits to classifying spam on the client side. As for server-side anti-spamming technology, servers using POP
cannot reflect the tastes of each user. To address those problems, I developed a module system, E-Canister, that aids spam filtering on the server. To effectively reflect
each user's needs on the server, the module system uses a different protocol, which many mail clients support: the Internet Message Access Protocol (IMAP).
E-Canister is based on many modules that use machine learning techniques. Each module learns from the user's actions and decides whether the user wants a given
mail. In this paper, the algorithm of each module and the design of E-Canister are described. Experiments examining the efficiency of E-Canister are presented.
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2004 - CS015
CREATION OF A LEARNING TOOL FROM A GAME ENGINE IN C++
Phillip Devin Hopkins
Bentonville High School, Bentonville, Arkansas, USA
The purpose of this project was to investigate the question of whether a learning tool to teach letter recognition might successfully be derived from a game
engine. Because of its efficient organizational properties and object oriented programming, C++ was chosen to carry out this investigation.<br><br> To conduct
the experimentation, a simple objective platform game was created. During game production, a physics and graphics engine was designed and implemented to
handle the organization of the game. The addition of individual features to the game was documented through a series of trials. Each trial was a unique addition
or collection of additions to the engine. Errors, including software crashes and control glitches, were recorded. In the next phase, a new game was written using
the engine from the platform game. Trials in this stage consisted of changes to the driver that worked with the engine to compose the learning game.
Alterations to the engine, such as graphical and objective modifications, were recorded.<br><br> Although a few errors occurred during the first phase, most
consisted of simple coding flaws. During the second phase, changes that did not compromise the quality of the game were made. Both the error and alteration
data supported the objective of this project.<br><br> In conclusion, it is possible that a fun, interactive learning aid could be created from a game engine.
2004 - CS027
THE AUTOMATIC INFORMATION CATCHING SYSTEM BY MOBILE PHONE
Azusa Isomura
Tokyo Gakugei University Senior High School Oizumi Campus, Tokyo, Japan
Although we face a problem of information overload in this age of mass media, this system makes it possible to live in a ubiquitous information society with selected
information only. It enables you to receive the most recent, critical information you really need through the mail system of a personal mobile phone,
anytime and anywhere. It can free you from the annoyance of choosing the proper information from an overflow of information, and it saves time.<br><br>First, I
combined an existing application with a mobile phone and connected it with data on the Internet using Visual Basic 6. <br><br>Then I set up
keywords by which the system could find the proper information on a website automatically. It is important to select a proper website that is updated often,
so that you get the newest information. <br><br>The mail software sends the information to your mobile phone when the system recognizes the
keywords on the website. Anyone can use this system by selecting proper websites and setting up keywords according to one's own demands.<br><br>In
Japan we have many traffic accidents, particularly subway accidents. In these cases, this system is very helpful for getting accurate, up-to-date information to
decide what to do. From my experience, this system was a great help in deciding my next action during a subway accident. I am certain that this
system will be one of the ways to improve mobile phones as a handy terminal.<br><br> One must remember to get permission to use the websites.<br><br>
Awards won at the 2004 ISEF
Trip to China to attend the CASTIC - China Association for Science and Technology
2004 - CS043
COMPUTERS AT WAR: HOW FAR CAN ARTIFICIAL INTELLIGENCE GO?
Brandon James Jackson
Tahoka High School, Tahoka, Texas, USA
The purpose of my project is to further understand A.I. and its limitations. To demonstrate A.I., I created a slimmed-down version of chess: a 3x3 board with
3 pawns at either end. I created 2 "computers" (actually more like working models of a neural network).<br><br>Using pressboard, matchboxes to hold
memory, and the complete set of moves that can occur on a 3x3 board, I began pasting all the moves on the board in numerical order. Odd moves 1, 3, 5, and 7
represent computer A; even moves 2, 4, and 6 represent computer B. The computers play each other by pulling colored beads from the matchboxes; the color of
the bead indicates the move. Between the 50th and 70th game played, the computers stop winning and losing altogether. This stems from the
reward process that teaches the computers: both computers begin to make winning moves, and this results in unending ties. During this experiment I
came to realize that A.I. will always be limited by the nature of its program. My computers are experts in this chess variant, but asking them to play checkers is impossible
because they lack the ability to learn outside of their programming. Thus A.I. will always be limited to its programming unless the program is written to show no
limits.
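The matchbox-and-beads learning rule, in the spirit of Michie's MENACE machine, can be sketched in Java; the states and moves here are abstract integers rather than actual 3x3 pawn positions.
    import java.util.*;

    public class MatchboxLearner {
        static final Random RND = new Random();
        // beads.get(state)[move] = number of beads of that move's color in the box.
        static final Map<Integer, int[]> beads = new HashMap<>();

        // Draw a bead at random; more beads for a move means it is chosen more often.
        static int chooseMove(int state, int legalMoves) {
            int[] b = beads.computeIfAbsent(state, s -> {
                int[] init = new int[legalMoves];
                Arrays.fill(init, 3);                 // start with 3 beads per move
                return init;
            });
            int total = 0; for (int x : b) total += x;
            int draw = RND.nextInt(total);
            for (int m = 0; m < b.length; m++) {
                draw -= b[m];
                if (draw < 0) return m;
            }
            return b.length - 1;
        }

        // The reward process: add beads after a win, remove after a loss.
        static void reinforce(List<int[]> movesPlayed, boolean won) {
            for (int[] sm : movesPlayed) {            // sm = {state, move}
                int[] b = beads.get(sm[0]);
                if (won) b[sm[1]] += 1;
                else if (b[sm[1]] > 1) b[sm[1]] -= 1; // never empty a box completely
            }
        }

        public static void main(String[] args) {
            List<int[]> game = new ArrayList<>();
            int move = chooseMove(0, 3);
            game.add(new int[] { 0, move });
            reinforce(game, true);
            System.out.println("beads after a win: " + Arrays.toString(beads.get(0)));
        }
    }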
2004 - CS305
ROBOT MOTION PLANNING
Enver KAYAASLAN, Soner YILMAZ
Private Samanyolu High School, Ankara, Turkey
Robot technology has recently come to the fore and gained a wide area of use. Within robot technology, Robot Motion Planning is one of the most
challenging problems of computer science, and one that interests many research groups. We focused on this topic with the purpose of providing
solutions to the problem with simple and efficient methods.<br><br> In our project we designed the motion of a robot in a two-dimensional plane. It moves
from a start point to an end point while overcoming the obstacles it encounters, and it can both translate and rotate. We solve this problem with probabilistic
path planning methods, which let our robot find better paths. <br><br> We divide the area into regions formed by rotations of the robot
through all angles, taking the center point, which is also the rotation point of our robot, as the reference point. By merging these regions smartly, we find a graph for
the robot's state at each angle. By overlapping all the graphs we obtain one large graph, in which our robot can find a better path.<br><br> Solving the problem with a
probabilistic path planning method lets us use time and memory more efficiently. With a little further improvement, our project could be
used in many areas of robot technology. <br><br>
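A minimal probabilistic-roadmap sketch conveys the flavor of such methods: sample random collision-free configurations, connect nearby pairs whose straight segment clears the obstacles, and search the resulting graph. The point robot and single circular obstacle below are simplifying assumptions; the project's configuration space also includes the rotation angle.
    import java.util.*;

    public class Roadmap {
        static double[] obstacle = { 5, 5, 2 };   // circle: center x, center y, radius

        // True if the straight segment a->b stays outside the obstacle.
        static boolean segmentFree(double[] a, double[] b) {
            for (double t = 0; t <= 1.0; t += 0.02) {
                double x = a[0] + t * (b[0] - a[0]), y = a[1] + t * (b[1] - a[1]);
                double dx = x - obstacle[0], dy = y - obstacle[1];
                if (dx * dx + dy * dy < obstacle[2] * obstacle[2]) return false;
            }
            return true;
        }

        public static void main(String[] args) {
            Random rnd = new Random(1);
            List<double[]> nodes = new ArrayList<>();
            nodes.add(new double[] { 0, 0 });      // node 0: start
            nodes.add(new double[] { 10, 10 });    // node 1: goal
            while (nodes.size() < 60) {            // sample collision-free points
                double[] p = { rnd.nextDouble() * 10, rnd.nextDouble() * 10 };
                double dx = p[0] - obstacle[0], dy = p[1] - obstacle[1];
                if (dx * dx + dy * dy >= obstacle[2] * obstacle[2]) nodes.add(p);
            }
            // Connect pairs closer than 3 units whose segment is collision-free.
            List<List<Integer>> adj = new ArrayList<>();
            for (int i = 0; i < nodes.size(); i++) adj.add(new ArrayList<>());
            for (int i = 0; i < nodes.size(); i++)
                for (int j = i + 1; j < nodes.size(); j++) {
                    double dx = nodes.get(i)[0] - nodes.get(j)[0];
                    double dy = nodes.get(i)[1] - nodes.get(j)[1];
                    if (dx * dx + dy * dy < 9 && segmentFree(nodes.get(i), nodes.get(j))) {
                        adj.get(i).add(j); adj.get(j).add(i);
                    }
                }
            // Breadth-first search from start (0) to goal (1).
            int[] prev = new int[nodes.size()];
            Arrays.fill(prev, -1);
            Deque<Integer> q = new ArrayDeque<>(List.of(0));
            while (!q.isEmpty()) {
                int u = q.poll();
                for (int v : adj.get(u))
                    if (v != 0 && prev[v] == -1) { prev[v] = u; q.add(v); }
            }
            System.out.println(prev[1] != -1 ? "path found" : "no path");
        }
    }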
Awards won at the 2004 ISEF
Team First Award of $500 for each team member - IEEE Computer Society
2004 - CS055
METHODS OF PURIFYING OF THE NEAR-EARTH SPACE FROM SPACE DEBRIS
Timur Khamziyev
Kazakh-American School, Almaty, Kazakhstan
The project proposes grounds for the appropriateness of space development. It shows the progress and perspectives of cosmonautics, as well as the
ecological problems of near-Earth space that result from such development.<br><br>Both passive and active methods of purifying the near-Earth
space from space debris are presented, with special attention given to the geostationary orbit.<br><br>We also offer our own method of purification by means of
an arbalest that uses energy extracted from what is otherwise lost during cosmonauts' daily training.<br><br>
2004 - CS021
MODELING COMPILER REGISTER ALLOCATION WITH A GRAPH COLORING HEURISTIC
Matthew Peter Klaber
Mankato West High School, Mankato, MN, USA
A compiler assigns variables to processor architected registers. If all registers are full, it instructs the program to use RAM, which is slower. Register allocation
can be represented as graph coloring in which each node on an interference graph represents a variable to be stored. Edges connect nodes that are live at the
same time. Each node is colored so that no nodes connected by an edge share the same color, i.e., the same register. This experiment uses graph coloring
without limiting the number of colors and tests how close compilers come to an optimal register allocation. The graph coloring method most compilers use is
modeled by David W. Matula's smallest-last coloring heuristic, which itself is based on A. B. Kempe's 1879 attempted proof of the Four Color Theorem. Two
compilers' register allocators produced 28,288 interference graphs to be tested. These graphs and their chromatic numbers (optimal colorings) were provided
by research advisor Max Hailperin. The chromatic numbers were determined using Olivier Coudert's Scherzo program.<br><br>A Java program was developed
to test the graph coloring heuristic on the data set. The resulting data showed the heuristic performed optimally 92% of the time. When off, it was mostly off by
just 1 color; very few times did it need as many as 3 extra colors.<br><br>These results lead to the conclusion that the fundamental graph coloring heuristic
used in register allocation performs rather well, and that compiler writers have little reason to build an exact coloring program such as Scherzo into a compiler.<br><br>
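For reference, the smallest-last heuristic is compact enough to render in full: repeatedly remove the vertex of minimum degree, then color vertices in reverse removal order with the lowest color unused by already-colored neighbors. The 4-cycle interference graph below is a toy example.
    import java.util.*;

    public class SmallestLast {
        static int[] color(List<List<Integer>> adj) {
            int n = adj.size();
            int[] degree = new int[n];
            for (int v = 0; v < n; v++) degree[v] = adj.get(v).size();
            boolean[] removed = new boolean[n];
            int[] order = new int[n];
            // Build the smallest-last ordering by repeated min-degree removal.
            for (int k = n - 1; k >= 0; k--) {
                int best = -1;
                for (int v = 0; v < n; v++)
                    if (!removed[v] && (best == -1 || degree[v] < degree[best])) best = v;
                removed[best] = true;
                order[k] = best;
                for (int u : adj.get(best)) if (!removed[u]) degree[u]--;
            }
            // Greedy coloring in reverse removal order.
            int[] color = new int[n];
            Arrays.fill(color, -1);
            for (int v : order) {
                boolean[] used = new boolean[n];
                for (int u : adj.get(v)) if (color[u] >= 0) used[color[u]] = true;
                int c = 0;
                while (used[c]) c++;
                color[v] = c;
            }
            return color;
        }

        public static void main(String[] args) {
            // 4-cycle interference graph: 0-1, 1-2, 2-3, 3-0 (2-colorable).
            List<List<Integer>> adj = List.of(
                List.of(1, 3), List.of(0, 2), List.of(1, 3), List.of(0, 2));
            System.out.println(Arrays.toString(color(adj)));
        }
    }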
2004 - CS019
THE BABELFISH SINGS
Nikhil Krishnaswamy
Piedra Vista High School, Farmington, New Mexico, United States of America
The purpose of this research is to determine a method of integrating numbers, musical tones and spoken language, and to determine the accuracy of using this
system to predict the pronunciation of written words.<br><br> An examination of the current conventions of the IPA (International Phonetic Association) shows
twelve divisions in consonant articulation, as well as twenty-four different vowel sounds. Therefore, a system that utilizes a duodecimal (base 12) number
system could be used to write spoken language, numbers, and, because there are twelve individual notes in an octave in the Western scale, music as well.
<br><br> A computer program written to translate written English into both the new writing system and musical tones utilizing the rules of English spelling
without a preprogrammed lexicon that has about seventy-five to eighty percent (75% - 80%) accuracy could be regarded as a success, due to irregularities in
modern English orthography.<br><br> The entire research and development process can be done with a computer, so no specific materials other than a C
compiler are needed.<br><br> A duodecimal number system was developed, and each number was assigned a musical frequency, as well as several different
types of phonemes, each with its own diacritic to alleviate confusion. When writing music, each diacritic represents a different type of tone. Vowels represent
vocal tones, and consonants represent instruments.<br><br> A program to translate words or numbers to musical tones needs the following components:
functions to read input, a function to convert a decimal number to base 12, a function to process the pronunciation of text and convert it to IPA notation, a
function to convert the IPA notation to the Duodecimal Alphanumeric System previously described, and finally, a function to convert text in the new writing
system to musical tones.<br><br> The results supported the hypothesis. An average of 80.85% success was achieved with a low of 72.18% and a high of
84.29%. Out of 1269 total phonemes, 243 were incorrect.
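The decimal-to-duodecimal component is straightforward; in the sketch below the digits X and E for ten and eleven are a common convention standing in for whatever symbols the author's system assigns. (1269, the phoneme count above, converts to 899 in base 12: 8*144 + 9*12 + 9.)
    public class Duodecimal {
        static final char[] DIGITS = "0123456789XE".toCharArray();

        // Convert a non-negative decimal integer to a base-12 string.
        static String toBase12(int n) {
            if (n == 0) return "0";
            StringBuilder sb = new StringBuilder();
            while (n > 0) {
                sb.append(DIGITS[n % 12]);
                n /= 12;
            }
            return sb.reverse().toString();
        }

        public static void main(String[] args) {
            System.out.println(toBase12(1269));   // prints "899"
        }
    }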
2004 - CS024
INTERNET FOR THE PEOPLE DESIGNING A FEASIBLE METHOD TO PROVIDE FREE INTERNET ACCESS TO RURAL COMMUNITIES
Richard Taylor Kuykendall
Moorefield High School, Moorefield WV, USA
PURPOSE: My project attempts to address the concern of the gap between those who have Internet access and those who do not have access. With
broadband access free of charge, and widespread, data flow will increase and allow for knowledge to be spread more readily throughout the world. Wireless
means will also allow for solutions to environmental concerns. <br><br>PROCEDURES: In order to determine feasibility I will develop a theory for creating a
wireless network, design an ideal topology, apply the theory to a plan for an actual town, build and test antennas, and create a book that will allow people to easily
follow directions to create their own wireless networks. My theory will be physically proven in any area where success is doubtful. <br><br>DATA: My data,
too extensive to list fully here, includes antenna performance, a theory I have created for developing a wireless network, ideal locations for the example
town of Moorefield, WV, and a developed topology for rural wireless networks. <br><br>CONCLUSIONS: According to my data it is possible for one to create a
wireless network using appropriate antenna designs, correct topologies, and theory I have produced. One must also keep physical topology in mind when
creating a network. <br><br>APPLICATIONS<br><br>Personal Computing- People will be able to connect for free.<br><br>Commercial- Businesses can cut
costs by using wireless networks.<br><br>Military- Ad hoc wireless networks allow for quick and easy data transmission.<br><br>Government- Information can
be passed easily amongst government groups.
2004 - CS040
COST-BENEFIT ANALYSIS OF INTERNET MARKETING
Justin Michael Lebar
Pine View School in Osprey, Florida, USA
Many search engines sell space for ads that are shown when users search for specific keywords and are paid for on a per-click basis. I set out to determine
which set of diabetic supply keywords had corresponding ads that caused the highest percent of people who clicked on the ad to sign up for the website’s
service. From this information I calculated the cost-benefit ratios for each word, determining which words were the “best” of the set tested. I then attempted to
find some pattern by which I could pre-determine which keywords would have good cost-benefit ratios without running an extensive analysis.<br><br> To
collect this data, I wrote a system in Visual Basic .NET utilizing Microsoft’s ASP.NET and SQL Server 2000 to track a major diabetic company’s internet
marketing campaign. Over the test period, I had mixed results: I found that irrelevant keywords, such as “diabetic diet,” were expensive and delivered few
orders. However, there were great differences in performance between relevant keywords: some keywords, such as “glucometer,” one brand’s term for a
glucose-monitoring device, were expensive and delivered few orders. Other keywords, such as “accu chek,” a brand of glucose meters, delivered many leads
at a reasonable price. It seems as though the only sure way to determine which keywords will deliver orders at a reasonable price is to run an analysis program
like mine.<br><br> By using my system, the company has been able to reduce its advertising costs by about 50% without seeing a significant drop in orders.
2004 - CS037
THE ENHANCEMENT OF PASSWORD SECURITY SYSTEM USING KEYSTROKE VERIFICATION
Taweetham Limpanuparb
Mahidol-Wittayanusorn School, Salaya, Phuttamonthon, Nakhon Pathom, THAILAND
At present computer security is increasingly important as global access to information and resources becomes an integral part of many aspects of our lives.
Reliable methods for user verification are needed to protect both privacy and important data. Password verification has been used for a long time. Due to its
simplicity and affordability, this technique had been very successful. However, processing power of computers has increased dramatically which has made the
use of password verification insufficient. Enhancement of security in password verification can simply be conducted by increasing password length, changing
passwords more often, and/or using meaningless strings as passwords. Nonetheless these methods may not work efficiently because of human memory
limitation. <br><br> Keystroke verification, a biometric method, is based on user typing parameters which can be defined as key down time and inter-key time.
These two parameters are collected while users type in their passwords. Novel statistical methods are proposed here to determine whether the keystroke data
actually belong to the user. <br><br> The experiment was conducted using volunteer subjects. They participated in this research for a few months by using
computer program written for collecting their keystroke data. Results are analyzed and concluded in terms of errors and the factors corresponding to the errors.
The verification errors, namely type I and type II errors, are considered against several factors, e.g., password length and duration of participation in the research.<br>
<br> In summary, overall efficiency of the system is acceptable. To tighten the security, the proposed key-stroke approach can be applied (as a complement to
the existing technique) to verify any user in the computer system. <br><br>
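The paper's statistical methods are its own; as a generic illustration of keystroke verification, the sketch below compares a login attempt's timing vector against the mean and standard deviation of the user's enrolled samples and rejects when too many timings fall outside a z-score threshold. All thresholds and data are hypothetical.
    public class KeystrokeVerifier {
        // enrolled: rows of key-down/inter-key times from the genuine user.
        static boolean verify(double[][] enrolled, double[] attempt, double zLimit) {
            int outliers = 0;
            for (int i = 0; i < attempt.length; i++) {
                double mean = 0, var = 0;
                for (double[] s : enrolled) mean += s[i];
                mean /= enrolled.length;
                for (double[] s : enrolled) var += (s[i] - mean) * (s[i] - mean);
                double sd = Math.sqrt(var / enrolled.length) + 1e-9;
                if (Math.abs(attempt[i] - mean) / sd > zLimit) outliers++;
            }
            return outliers <= attempt.length / 4;   // tolerate a few deviations
        }

        public static void main(String[] args) {
            double[][] enrolled = {                  // timings in milliseconds
                { 110, 95, 140, 80 }, { 105, 100, 150, 78 }, { 115, 92, 145, 85 } };
            System.out.println(verify(enrolled, new double[] { 112, 96, 142, 81 }, 3.0)); // true
            System.out.println(verify(enrolled, new double[] { 60, 200, 60, 200 }, 3.0)); // false
        }
    }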
Awards won at the 2004 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
2004 - CS025
WILL THE NEXT INTRODUCED CPU MAKE A DIFFERENCE?
Ariel Llizo
Doral Academy Charter High School, Miami, Florida, USA
The purpose of this experiment is: (1) to determine whether certain processing functions of a computer (Audio Conversion; 3D Vector Calculation) increase in
performance progressively or exponentially as processor speed is increased and (2) to determine how those processing functions will perform once a new,
hypothetical processor is introduced. For both tests, Hypothesis 1 was invalid because the graph of Audio Conversion performance was represented by a
function that included a power of x, while 3D Vector Calculation was represented by a hyperbolic line (both were hypothesized to be represented by straight
lines). The second hypothesis proved valid (performance for the computer with the hypothetical processor fell within the hypothesized range;
refer to data).<br><br> Nine computers with different processor speeds were benchmarked 10 times for each individual test using PCMark2002. Data was
averaged and separated into 2 groups of tests. When graphed, the results demonstrated the above increases in performance (refer to results in Paragraph 1).
<br><br> The most likely results for the performance of the hypothetical processor were determined. The results were: 114.157227654 KB/s for Audio
Conversion and 65.8399470722 fps for 3D Vector Calculation on a PC with 3.16 GHz of processing speed.<br><br> Fluctuations in the data were possibly
caused by RAM differences, or by enhancements made by the manufacturer. This experiment could be improved by conducting more trials per computer, or by
comparing the data gathered to a similar experiment using stricter methods (computers with VERY similar specifications other than processing speed).
2004 - CS048
ADAPTIVE INTERFERENCE REJECTION IN WIRELESS NETWORKING
Tarang Luthra
Torrey Pines High School, San Diego, CA, United States of America
Wireless networking is fast becoming ubiquitous. With the crowding of airwaves, the interference from unwanted sources is increasingly impacting how fast and
far away one can communicate. The focus of this project was on developing and simulating an adaptive algorithm in which an antenna array in a receiver can
automatically adjust itself to provide the optimum rejection of interference without knowing which direction it is coming from. A new scheme was developed
which further helped in rejecting the interference that came from a direction closer to the source. A computer simulation using Java programming language was
then completed to conduct simulated experiments and characterize the impact of various parameters. <br><br> Comparisons among single element systems
currently used in many products, the fixed (non-adaptive) array processing method, developed by me last year, and the current adaptive method were made.
The adaptive processing method performed the best because it had the ability to automatically adapt to the environment and reject interference without
knowing the interferers’ angles. It could reduce the interference up to a factor of 50 over a single antenna system used in various products today. Also,
compared to the fixed processing method, the adaptive technique reduced the interference up to a factor of 10, or achieved similar performance with 1/2 the
number of elements. <br><br> This work will allow one either to significantly increase the performance and usefulness of a wireless network or to develop cost-effective products by using fewer antenna elements for the same performance.<br><br>
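The author's adaptive array algorithm is not spelled out in the abstract, so the sketch below shows a closely related classic instead: a Widrow-style LMS adaptive canceller, in which interference seen on a reference element is adaptively subtracted from a primary input carrying signal plus interference. Frequencies, phases, and the step size are assumptions.
    public class LmsCanceller {
        public static void main(String[] args) {
            double mu = 0.02;                 // LMS step size
            double[] w = new double[2];       // two-tap adaptive filter
            double rPrev = 0, errPower = 0;
            for (int n = 0; n < 20000; n++) {
                double s = Math.sin(0.05 * n);                    // desired signal
                double primary = s + 2 * Math.sin(0.3 * n + 1.0); // signal + interference
                double r = Math.sin(0.3 * n);                     // interference reference
                double y = w[0] * r + w[1] * rPrev;               // interference estimate
                double e = primary - y;                           // output: signal estimate
                w[0] += mu * e * r;                               // LMS weight update
                w[1] += mu * e * rPrev;
                rPrev = r;
                // Average residual interference power over the final samples.
                if (n >= 19000) errPower += (e - s) * (e - s) / 1000;
            }
            System.out.printf("mean squared residual interference: %.5f%n", errPower);
        }
    }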
Awards won at the 2004 ISEF
Tuition scholarship of $105,000 - Drexel University
Scholarship award of $5,000 per year for four years - Linfield College
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2004 - CS047
T-WEB: THE TALKING WEB BROWSER FOR THE BLIND AND VISUALLY IMPAIRED
Ahmad Shakir Manshad
Las Cruces High School, Las Cruces, NM, United States of America
Learning to use the internet can be difficult for people who are blind or have visual impairments. The only way to help these people surf the internet easily is by
using a web browser that is specifically designed for the blind and visually impaired. There are several web browsers for them on the internet; I downloaded five
different web browsers. Which web browser is the best? How can we make a web browser that can compete with the ones that were downloaded?<br><br>
The tests consisted of fifteen different test cases, and each web browser went through all of them. The test cases were drawn from five main
tasks that people perform regularly when using the internet.<br><br>After the testing, IBM HPR 3.0 was proven to be the best web browser for the blind and
visually impaired. All the web browsers had the same flaws: none could handle a Java Applet or Flash animation, and some couldn’t read frames. The web
browser that I created, T-Web, turned out to be the second best, and it’s more user-friendly than IBM HPR 3.0 or any other web browser. T-Web and IBM HPR
3.0 were tested by a blind person, whose favorite was T-Web. The blind person agreed that T-Web is simpler to use because of its easy-access keyboard controls
and its speech output quality, including sound volume and information about active objects on a webpage. IBM HPR 3.0 was considered confusing and needed
to provide easier interaction with the user.
Awards won at the 2004 ISEF
IEEE Regional Award of Merit, from individual donations, on behalf of IEEE Regional Activities of $50 each per regional winner. IEEE Regional Activities
regional winners are young technical and gifted students who have demonstrated an aptitude in an IEEE technical area of interest and represent the
transnational nature of IEEE. - IEEE - Oregon and Seattle Sections
2004 - CS012
AN INVESTIGATION & MEASUREMENT OF GLOBULAR CLUSTERS
Elan Virginia Mitchell
Benjamin E. Mays High School, Atlanta, Georgia
The title of my project is An Investigation & Measurement of Globular Clusters. I am determining which of (4) chosen globular clusters is closest to the
Earth. I hypothesize that all globular clusters have about the same extent vertically and horizontally since they are all gravitationally bound objects. My
procedure was first to select (4) globular clusters from an astronomical archive, including one that is commonly known. Second, I used the Steward Observatory
21-inch telescope at the University of Arizona. Third, I used CCD imaging of the clusters M3, M92, M13, and NGC6626. Next, I processed my photographs
from the University of Arizona’s computer laboratory. Then, I measured the clusters north, south, east, and west. I recorded data from those measurements.
Lastly, I created a Java program that would first give me the x-max’s minus x-min’s and y-max’s minus the y-min’s. Secondly, my Java Program would allow me
to average my clusters followed by a ranking/sorting system. Finally, my program would allow me to calculate distances of newly discovered globular clusters
from the Earth. In conclusion, I used the distance-ratio formula to calculate the real-distance in astronomical units that allowed me to conclude that globular
clusters only differ 20% in size horizontally and vertically.<br><br>
2004 - CS002
THE SARS EPIDEMIC: USE OF A COMPUTER MODEL TO PREDICT THE VALUE OF CONTROL MEASURES
Ryan Jonathan Morgan
Saint Edward's School, Vero Beach, Florida, USA
Severe Acute Respiratory Syndrome (SARS) caught the attention of the World last year when the epidemic spread out of China, infecting over 8,000 people
with a mortality rate approaching fifteen percent. Questions about how the epidemic can be controlled cannot be answered experimentally, but they might be
approximated by a computer simulation of the epidemic. <br><br> A computer model of SARS was developed and used to test the hypothesis that the
commonly used techniques can attenuate the epidemic, but that more drastic control measures such as contact quarantine and mandatory travel restrictions
are necessary to keep the mortality rate at low levels.<br><br> To test this hypothesis, a computer program was written that can simulate the spread of the
SARS epidemic. The program was verified by comparing the results it obtained to those reported from the Hong Kong outbreak of 2003, with excellent
concordance. <br><br> This model was then used to examine the effects of the use of surgical masks, early isolation of patients, and voluntary travel
restrictions on the course of the epidemic. Each of these modified the spread of the disease and reduced the mortality rate. A combination of these methods
proved even more effective than any individual measure alone. Simulations of the more drastic measures of contact quarantine and mandatory travel
restrictions demonstrated that the mortality rate can be reduced to nominal levels, although the social cost would be great. The development of an effective
vaccine could control the epidemic equally well and would be less disruptive to society.<br><br>
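The author's model is far more detailed than can be shown here; a minimal SIR-style difference-equation sketch conveys the general shape of such a simulation, with control measures modeled as a reduction in the transmission rate. All parameter values are illustrative, not the calibrated Hong Kong values.
    public class SarsModel {
        public static void main(String[] args) {
            double n = 7_000_000;                 // population
            double s = n - 10, i = 10, r = 0;     // susceptible, infected, removed
            double beta = 0.40, gamma = 0.10;     // transmission and recovery rates
            double control = 0.5;                 // masks/isolation halve transmission
            for (int day = 1; day <= 300; day++) {
                double b = (day > 30) ? beta * control : beta; // measures begin on day 30
                double newInf = b * s * i / n;    // new infections this day
                double newRec = gamma * i;        // recoveries/removals this day
                s -= newInf;
                i += newInf - newRec;
                r += newRec;
            }
            System.out.printf("total ever infected: %.0f%n", n - s);
        }
    }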
2004 - CS065
FTP PROXY IMPROVEMENT MODEL
Jung Hyun Nam
Wooshin High School, Seoul, South Korea
FTP (File Transfer Protocol) is a protocol used for file exchange over the Internet, and much software and many devices use FTP as a basic protocol. Although ten
years have passed since the introduction of FTP, it still has a lot of problems, such as security and file processing. In this paper I treat these problems from a new
viewpoint. I describe the theory and necessity of the FTP Proxy Improvement Model (FTP-PIM), a new architecture that can solve these problems at
once. I also compare my FTP-PIM proxy model with many existing solutions and demonstrate how superior my model is. <br><br>
2004 - CS046
COLLABORATIVE NURSE SCHEDULING: A NOVEL APPROACH TO IMPROVE UTILIZATION OF EXISTING NURSE RESOURCES
Sonia Nijhawan
duPont Manual High School, Louisville KY, USA
In light of the growing nurse shortage that has affected hospitals all over the United States, a collaborative nurse scheduling system was created that connects
hospital requirements with registered nurse (RN) requests. The scheduling system helps to solve the nursing shortage by efficiently scheduling the current pool
of licensed RNs through the use of micro-shift scheduling, collaboration of hospitals or community-wide scheduling, and easy access to the scheduler (via cell
phone, e-mail, ASP form). The scheduling system’s front end and back end were created using Active Server Pages, HTML, JScript, VBScript, and the SQL
Server. When matching data, the scheduler takes into account location, specialization, shift, pay, date, a quality matrix, and heuristics. After the prototype of the
system was developed, several hospitals reviewed the scheduler and gave suggestions for improvement. It was found that each hospital would be sending
large amounts of hospital request transactions per hour. Therefore, several transfer protocols were added to the system to allow for the transfer of large sets of
data and a unique XML protocol (named nXML) was developed that defined hospital request and scheduler response transactions. After the completion of the
collaborative nursing scheduler, data was accumulated from local Louisville hospitals. The data was used to find and predict the effect of the scheduler on the
present nursing shortage and it was found that the scheduler helps in minimizing the gap between nursing supply and hospital demand.
Awards won at the 2004 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2004 - CS056
EFFICIENT LINE-OF-SIGHT ALGORITHMS ON A DELAUNAY TRIANGULATED IRREGULAR NETWORK
Elam James Oden
North Alabama Friends School, Huntsville, Alabama, USA
The purpose of this project was to find the fastest algorithm to determine if one point could be seen from another over a Delaunay triangulated irregular
network. If the line between the two points is unobstructed it is called the line of sight. Prior to this project, I had very little experience with computer
programming and 3-D (3-Dimensional) graphics. Therefore, one of the main goals of this project was to gain a firm understanding of both programming and the
underlying concepts of 3-D computer graphics.<br><br>I began this project by creating a 3-D graphics engine from scratch with which to render and manipulate
3-D points and shapes. In the process of building the graphics engine, I learned about the math that is required to successfully perform calculations on objects
in three dimensions including vector math, transformations, translations, and the conversion of 3-D points to 2-D (2-Dimensional) points.<br><br>I programmed
a terrain editor to create terrain models by allowing a user to select spot elevations. I then included an algorithm to perform a Delaunay triangulation on those
spot elevations. Loading the resulting triangles into the 3-D engine, I could determine if the line of sight intersected the surface by testing the line of sight for
intersection with any of the triangles. Various methods were created to speed up the process by eliminating the need to test some of the triangles. I tested each
method in several situations and recorded the results to determine which one was the fastest.<br><br>
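The core test such an engine rests on, whether the segment between observer and target hits a terrain triangle, is commonly implemented with the Moller-Trumbore ray/triangle intersection, sketched below on an illustrative triangle; the abstract does not say which test the author used.
    public class LineOfSight {
        // Returns true if segment p->q hits triangle (a, b, c).
        static boolean segmentHitsTriangle(double[] p, double[] q,
                                           double[] a, double[] b, double[] c) {
            double[] dir = sub(q, p), e1 = sub(b, a), e2 = sub(c, a);
            double[] h = cross(dir, e2);
            double det = dot(e1, h);
            if (Math.abs(det) < 1e-12) return false;       // segment parallel to plane
            double inv = 1.0 / det;
            double[] s = sub(p, a);
            double u = inv * dot(s, h);
            if (u < 0 || u > 1) return false;              // outside barycentric range
            double[] qv = cross(s, e1);
            double v = inv * dot(dir, qv);
            if (v < 0 || u + v > 1) return false;
            double t = inv * dot(e2, qv);
            return t > 0 && t < 1;                         // hit lies within the segment
        }
        static double[] sub(double[] x, double[] y) {
            return new double[] { x[0] - y[0], x[1] - y[1], x[2] - y[2] };
        }
        static double[] cross(double[] x, double[] y) {
            return new double[] { x[1] * y[2] - x[2] * y[1],
                                  x[2] * y[0] - x[0] * y[2],
                                  x[0] * y[1] - x[1] * y[0] };
        }
        static double dot(double[] x, double[] y) {
            return x[0] * y[0] + x[1] * y[1] + x[2] * y[2];
        }

        public static void main(String[] args) {
            double[] p = { 0, 0, 1 }, q = { 0, 0, -1 };    // vertical sight line
            double[] a = { -1, -1, 0 }, b = { 1, -1, 0 }, c = { 0, 1, 0 };
            System.out.println(segmentHitsTriangle(p, q, a, b, c));  // true
        }
    }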
2004 - CS010
USING JAVA TO COMMUNICATE ACROSS MULTIPLE PLATFORMS
Daniel Jonathan Parrott
Bartlesville High School, Bartlesville OK, United States
Executable programs are compiled forms of code. When run on a computer, they have full access to the system, meaning that they can perform such malicious
functions as deleting files. They are also specific to individual platforms; a program designed on Windows will not work on a Macintosh. Additionally, executable
programs often require additional code libraries so that installation requires more than just one file. This results in distribution problems that arise when trying to
develop a strong user base.<br><br> To circumvent these problems, it was believed that a Java applet could be used in place of an executable program.
Applets run inside web pages and rely on the Java Virtual Machine (JVM) to function. As a result, no additional library files are required. At the same time, they
are also platform-independent; any system that supports the Java framework can load and run an applet.<br><br> Client and server programs (used in chat
systems) were developed to demonstrate the functionality of such a Java applet. The client applet was capable of running on both Macintosh and Windows
systems. When connected to the server, it was able to send encrypted messages in real-time to a client running on a different platform. Also, the client program
took on the native appearance of its current operating environment. Since it operated under security restrictions, it was also incapable of harming the system.
All of this supports the idea that most executable programs could be replaced with Java applets to alleviate the problems inherent in distribution.
2004 - CS007
ACADEMIC NETWORK: INTERNET HOMEWORK MANAGEMENT FOR TEACHERS, STUDENTS, AND PARENTS
Roger Warren Pincombe
McIntosh High School, Peachtree City, GA, USA
Students must keep track of many assignments, tests, and projects, some of which may be forgotten, resulting in lower grades. Current assignment recording
methods do not seem effective, even with parent involvement. It appears there is a need for a thorough and efficient homework assignment communication tool
that can be used by teachers, students, and parents.<br><br>The primary objective of this engineering fair project is to develop a tool that enables students to
maximize learning by providing thorough assignment information and allowing for parent involvement. The goal is to develop a standardized assignment
management tool using an internet application that may be implemented nationwide by school systems.<br><br>If a tool like this is developed, then students’
assignment completion rates will increase.<br><br>The Academic Network was designed as an internet application using a server programming language
platform called Active Server Pages. A Microsoft Access database is used to store assignment information. The program provides simple instructions that lead users
through setup and usage. Students and parents can view assignments for all classes on one page in a calendar or list view. Teachers can manage multiple
classes with separate assignment listings. Privacy is protected by a secure password system.<br><br>After program implementation, a survey conducted with
teachers, students, and parents found that 94% agreed that Academic Network was an effective assignment communication tool. Among teachers who require
students to use the new system, assignment completion rates increased substantially. Since Academic Network was developed as an internet-based
application, students nationwide may benefit from its use. <br><br>
Awards won at the 2004 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2004 - CS001
SIX GIGAHERTZ AND BEYOND
Andrew Williams Powell
Trinity Catholic High School, Ocala, Florida, United States
In this study it was hypothesized that generic personal computers (PCs) could be assembled from readily available parts and linked together using standard
Ethernet technology. It was further hypothesized that the three machines could use Linux Mandrake 9.0 Clic software as an operating system and that with this
operating system the machines could be clustered together to greatly enhance the processing speed of the system.<br><br>Computer components were
ordered from a computer supply house. These components were assembled into three identical PCs using Athlon processors running at 3 Gigahertz. The three
machines were linked together with proper Ethernet equipment. Linux Mandrake 9.0 Clic software was installed on the three machines and configured so that
one machine became the server and the other two machines became the nodes.<br><br>To test the performance of the system the Povray Chessmark
benchmark software was run on the system first with one node operating then with both nodes working together. It was found that with one node operating the
system could complete the processor intensive Chessmark drawing in 11 min. and 46.32 seconds while with both nodes the performance approximately
doubled to 5 min. and 54.97 seconds.<br><br>Based on the above results it was determined that the hypothesis was upheld. Thus, generic PCs with readily
available processors and components can be linked or clustered together in such a way that their processing speeds become additive. This technology can be
used to create clusters of PCs with speeds rivaling that of much more expensive mainframe machines. <br><br>
2004 - CS018
COMPUTATIONAL MODELING OF PHYSICAL INTERACTIONS IN THREE DIMENSIONS
James Halligan Powell
Terre Haute South Vigo High School, Terre Haute IN, USA
A Windows program is designed and developed in C++ to provide three dimensional modeling and rendering of various physical interactions of spherical
objects, and to examine the accuracy of the model using different algorithms, floating point precision, and other settings. The software is designed with a user-friendly interface that allows the user to accurately simulate complex interactions of a large number of objects and forces.<br><br>The basis of the model lies in
Newton’s laws of motion. Physical laws are applied to evaluate the force on an object at a given instant, and an algorithm is used as a numerical approximation
to the second order differential equation that governs the object’s motion. The software allows the user to provide relevant data for each object, such as mass,
charge, position, initial velocity, and radius. The user may add springs with any spring constant and equilibrium length, they may add planes with any normal
and coefficient of restitution that act as boundaries for the objects, and they may provide forcing functions for specific objects (these allow the user to use many
standard math functions, including Heaviside and delta functions). The user may start and stop the simulation at will, and observe the position, velocity, and
force acting on the object at any time.<br><br>The program was accurate to up to eight digits over small intervals of time. Rounding error caused some
inaccuracy, as well as the absence of a perfect technique to approximate numerical solutions to the second order differential equations governing motion.
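As an illustration of turning Newton's second law into a difference scheme, the sketch below steps a spring force F = -kx with the semi-implicit Euler method, one simple choice among possible integration algorithms; the mass, stiffness, and step size are illustrative.
    public class SpringSim {
        public static void main(String[] args) {
            double m = 1.0, k = 4.0;          // mass and spring constant
            double x = 1.0, v = 0.0;          // initial position and velocity
            double dt = 0.001;                // integration step
            for (int step = 0; step < 10000; step++) {
                double a = -k * x / m;        // acceleration from F = -k x
                v += a * dt;                  // update velocity first...
                x += v * dt;                  // ...then position (semi-implicit Euler)
            }
            // Exact solution: x(t) = cos(2t); at t = 10, cos(20) ~ 0.4081.
            System.out.printf("x(10) = %.4f (exact %.4f)%n", x, Math.cos(20));
        }
    }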
2004 - CS013
PESDOC - A FARMER-FRIENDLY SOFTWARE FOR PEST DIAGNOSIS OF CROPS
Kanishka Raajkumar
Shree Baldevdas Kikani Vidyamandir Matric. Hr. Sec. School, Coimbatore, India
PESDOC is aimed at helping farmers in remote rural areas seek effective solutions for pest control in crops (at present for cotton, sugarcane, and rice) by
providing software that can analyze symptoms, identify pests, and suggest remedial measures. The system currently covers 30 pests (around 10 pests per
crop).<br><br>PESDOC was created in Visual Prolog 5.2. The knowledge acquisition module was created in Visual Basic 6.0. The animation and
audio were developed using Macromedia Flash 5.0 and the Windows Sound Recorder utility, respectively.<br><br>It takes the form of a desktop pest expert
with which a farmer, who has no formal training in pest control, interacts by responding to a combination of queries. The software employs rule-based
reasoning represented in Horn clauses. Knowledge was acquired from texts and from human experts in the fields of plant pathology and entomology. PESDOC
employs a backward chaining control strategy, and a special interface is provided for updating the knowledge base. Pest management procedures are
illustrated using audio and video files developed in the local language for the benefit of rural farmers. <br><br>When tested, PESDOC performed as well
as human experts within its narrow domain. Thus it can be concluded that PESDOC is able to provide effective diagnosis of pest damage and hence suggest
treatment procedures to farmers. It can also help plant experts in diagnosis.<br><br>
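A toy backward-chaining flavor of such a system can be sketched in Java: Horn-clause-style rules map required symptoms to a pest, and each pest hypothesis is checked by confirming its premises. The rules and symptoms below are invented examples, not PESDOC's knowledge base.
    import java.util.*;

    public class PestDiagnoser {
        // Each rule: pest hypothesis -> symptoms that must all be confirmed.
        static final Map<String, List<String>> RULES = Map.of(
            "bollworm", List.of("holes in bolls", "larvae present"),
            "aphid",    List.of("curled leaves", "sticky residue"));

        // Work backward from each hypothesis, checking its premises.
        static Optional<String> diagnose(Set<String> observed) {
            for (Map.Entry<String, List<String>> rule : RULES.entrySet())
                if (observed.containsAll(rule.getValue()))
                    return Optional.of(rule.getKey());
            return Optional.empty();
        }

        public static void main(String[] args) {
            Set<String> symptoms = Set.of("curled leaves", "sticky residue");
            System.out.println(diagnose(symptoms).orElse("no pest identified"));
        }
    }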
Awards won at the 2004 ISEF
Second Award of $500 U.S. Savings Bond - Ashtavadhani Vidwan Ambati Subbaraya Chetty (AVASC) Foundation
Award of $5,000 - Intel Foundation Achievement Awards
2004 - CS304
FOLD AND CUT COMPUTATIONAL ORIGAMI IN C++
John Sweeney Reid, James Wesley Colovos
Career Enrichment Center, Albuquerque, NM, USA
The goal of our project is to create a program in C++ that could accurately determine the lines upon which a piece of material needs to be folded such that any
given straight-edged polygon can be excised with a single straight cut. We wanted to be able to fold and cut both convex and concave polygons of varying
complexity. To attain the goal stated above, we developed our own algorithm, loosely based on one developed by Erik Demaine. To our knowledge, parts of
our algorithm are new to the Computational Origami community. To date, our algorithm works only on approximately 80% of simple convex polygons. However,
in our opinion, this is a huge success. Our project is steeped in geometry, computer science, and origami. This unique mixture of fields has made the work on
our project very interesting and enlightening.
2004 - CS053
ARTIFICIAL VISUAL PERCEPTION: AN INTEGRATED APPROACH TO NEUROADAPTIVE MODELING
Kimberly Elise Reinhold
St. Joseph High School, Hilo, HI, U.S.
Complex computer models of the human visual system were developed in this project using artificial neurons as fundamental building blocks. Evolutionary adaptation and neural network modification mechanisms were emulated with the design of an original paradigm for artificial visual perception. Techniques from four fields of artificial intelligence (evolutionary computation, cellular automata, expert systems, and artificial neural networks) were incorporated into the system's multilayered and hierarchical architecture.
Two challenging and useful applications for hematopoietic disease diagnosis (cytology) and salivary gland tumor classification (histology) were pursued. Features of input based on size, shape, color, and relative position were used by the networks to distinguish images in the training and recognition modes. Specific attributes for evaluation were chosen by expert criteria, interactive selection, or evolutionary algorithms that identified image translation protocols defined by cellular automata rules. The network learned to distinguish every training image in all applications. Its accuracy (86-92%) and specificity (100%) for the classification of unfamiliar input were excellent. Use of the evolutionary computation approach to attribute analysis provided a relatively autonomous method for visual perception.
Results from the paradigm developed in this study suggest its general applicability to a wide range of image recognition tasks.
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
Third Award of $300 - Association for Computing Machinery
Third award of $250 - GE
Tuition scholarship of $105,000 - Drexel University
Out of state tuition, approximately $5,000 per year for four years - Oregon Institute of Technology
First Award of $700 - IEEE Computer Society
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Special Achievement Award Certificate - Caltech JPL
First Award of $3,000 - U.S. Air Force
2004 - CS008
A COMPARISON OF COMPUTER BASED PSEUDO RANDOM NUMBER GENERATORS
Christopher Scott Rogers
Caddo Parish Magnet High School, Shreveport, LA, USA
I conducted this experiment to determine which pseudo-random number generator (PRNG) function in the Hypertext Preprocessor (PHP) language, rand() or mt_rand(), was better at producing random numbers. This experiment also tested whether I could develop a better PRNG in the PHP language.
I coded the procedures into a PHP program that comprised three sections: generation, data collection, and analysis.
The generation section included five PRNGs: 1) Linear - the rand() function; 2) Linear Advanced - ceil( rand() * ( rand() / ( rand() + rand() ) ) ); 3) Mersenne Twister - the mt_rand() function; 4) 2D Matrix; 5) 3D Matrix. The 2D and 3D matrices were both seeded by mt_rand() and combined arrays of previously generated numbers with the generation operation to produce new numbers.
The data collection section stored sets of 10,000 numbers per generator, per run. One hundred runs were conducted, for a total of 5 million numbers generated.
The analysis section first converted each number to 0 or 1 depending on whether the number was even or odd. Next, the program ran three tests: the frequency test, the length-of-runs test, and the number-of-runs test. Graphs were then produced to allow comparison of the results.
My conclusions: 1) mt_rand() performed better than rand(); in fact, the rand() function failed completely. 2) I was able to develop a better PRNG using PHP. The matrix-type generators performed the best, with the 3D matrix holding a slight but insignificant advantage.
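The abstract names its three parity-based randomness tests without giving implementations. The following is a minimal sketch of how such tests could look; the function names and the use of Python (rather than the project's PHP) are my own choices:

    from typing import List

    def to_parity_bits(numbers: List[int]) -> List[int]:
        """Map each number to 0 (even) or 1 (odd), as in the analysis section."""
        return [n & 1 for n in numbers]

    def frequency_test(bits: List[int]) -> float:
        """Fraction of ones; a good generator should stay near 0.5."""
        return sum(bits) / len(bits)

    def run_lengths(bits: List[int]) -> List[int]:
        """Lengths of maximal runs of identical bits; the number of runs is the
        length of this list, and its distribution feeds the length-of-runs test."""
        lengths, count = [], 1
        for prev, cur in zip(bits, bits[1:]):
            if cur == prev:
                count += 1
            else:
                lengths.append(count)
                count = 1
        lengths.append(count)
        return lengths

For 10,000 bits from a fair source, roughly 5,000 runs are expected, and run lengths should halve in frequency as they grow by one.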
2004 - CS014
AN ALGORITHMIC APPROACH TO DNA-BASED DATA STORAGE AND ERROR CORRECTION
Ho Seung Ryu
Pacific Grove High School, Pacific Grove, CA, U.S.A.
This project proposes storing binary computer data in DNA. Despite the exponential increase in the capacity of optical and magnetic storage, more efficient data storage is needed, and DNA provides unparalleled data density. To guarantee that information stored in DNA can be properly retrieved when needed, error correction mechanisms are developed in this study.
A simple code is devised to translate between binary information and DNA nucleotide sequences. To address the errors that may occur in DNA, various error correction algorithms, known collectively as Constructive Restructuring for Recovery of Two-bit Streams (CORRECTS), are defined. By incorporating repeating alignment bit patterns with bit sequences generated by linear feedback shift registers (LFSRs), discrepancies between expected and observed bit sequences allow for the identification and correction of inversion, translocation, insertion, and deletion errors. The remaining errors are corrected by the intra-word integrity and final recovery mechanisms. Computer simulations and analysis of DNA-specific errors demonstrate that the errors that occur in DNA can be corrected by these processes.
This study extends data storage in DNA to include non-textual information for the first time. CORRECTS, the first error correction method designed specifically for errors in DNA, ensures that data stored in DNA can be read without error. As such, DNA can be used to safely store space-intensive information such as music and other multimedia in a tiny volume. Encoded DNA fragments could be inserted into hardy bacterial cells to preserve human knowledge, even in adverse conditions, for millions of years.
Awards won at the 2004 ISEF
Paid Summer Internship - Agilent Technologies
Second Award of $500 - IEEE Computer Society
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
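The abstract does not spell out its binary-to-nucleotide code or its LFSR construction, so the following Python sketch shows one plausible form of each; the two-bits-per-base mapping and the LFSR details are illustrative assumptions, not the project's actual scheme:

    # Hypothetical 2-bits-per-nucleotide code (the project's actual code is not given).
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def encode(bits: str) -> str:
        """Translate an even-length bit string into a nucleotide sequence."""
        assert len(bits) % 2 == 0
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(dna: str) -> str:
        """Translate a nucleotide sequence back into a bit string."""
        return "".join(BASE_TO_BITS[base] for base in dna)

    def lfsr_bits(seed: int, taps: tuple, width: int, n: int) -> list:
        """Fibonacci LFSR: emit n bits; feedback is the XOR of the tapped bits.
        Predictable sequences like these serve as the expected patterns whose
        mismatches reveal inversion, translocation, insertion, and deletion errors."""
        state, out = seed, []
        for _ in range(n):
            out.append(state & 1)
            feedback = 0
            for t in taps:
                feedback ^= (state >> t) & 1
            state = (state >> 1) | (feedback << (width - 1))
        return out

    # Example: encode("0110") == "CG" and decode("CG") == "0110".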
2004 - CS028
LANCACHE - A DISTRIBUTED AND CO-OPERATIVE CLIENT SIDE WEB PROXY CACHING ARCHITECTURE AND PERFORMANCE
Yogesh V. Saletore
Olympia High School, Olympia, WA, USA
In this science project we present LANCache, a software prototype implementation, and evaluate its performance in improving web access times. Currently, clients independently cache web objects on their own hardware to speed up subsequent accesses to the same objects. Other clients on a Local Area Network (LAN) must retrieve their own copies of the same objects from the Internet. Typically, caching servers are used to improve performance via object sharing, which requires additional hardware.
LANCache is a distributed and cooperative Web caching architecture for client PCs connected via a typical corporate LAN. The clients share a portion of their hard-disk and memory caches across the LAN as part of a distributed network cache. Peer clients request and retrieve web objects from each other prior to accessing the Internet. A local proxy is used on each client to manage the requests. The software model used to test the architecture uses open source software to connect clients, parses HTTP headers, and uses multithreading to concurrently process client requests.
Performance data was gathered for accessing a large number of corporate Web pages, graphic images of varying sizes, and other rich content from a peer client on the LAN and from the origin server on the Internet. Tests conducted show that the LANCache architecture achieves 2-7 times faster access times for Web pages, and is 5-25 times faster for large rich content, compared to accessing from the Internet. Test results strongly indicate that the LANCache architecture is a highly efficient and very cost-effective solution.
Awards won at the 2004 ISEF
Tuition scholarship of $105,000 - Drexel University
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
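A minimal sketch of the peer-first lookup order the abstract describes, checking the local cache, then LAN peers, then the origin server; the class and function names are hypothetical stand-ins, not LANCache's actual interfaces:

    import urllib.request

    class Peer:
        """Stand-in for a LAN peer's shared cache, reachable over the network."""
        def __init__(self):
            self.cache = {}
        def lookup(self, url):
            return self.cache.get(url)

    def http_get(url: str) -> bytes:
        """Fetch from the origin server on the Internet (the slowest path)."""
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    def fetch(url, local_cache, peers):
        if url in local_cache:            # 1. own hard-disk/memory cache
            return local_cache[url]
        for peer in peers:                # 2. peer caches on the LAN
            obj = peer.lookup(url)
            if obj is not None:
                local_cache[url] = obj
                return obj
        obj = http_get(url)               # 3. origin server on the Internet
        local_cache[url] = obj
        return obj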
2004 - CS058
THE SMARTEST FRIDGE IN THE WORLD
Eric Christopher Sanchez
Porter High School, Brownsville, TX, USA
The reason I chose to do the experiment was to determine whether a personal computer could operate more efficiently inside a compact refrigerator. I wanted to prove this method as an alternative to water-cooling, as well as to the conventional air/heat-sink cooling method.
The basic steps I followed in the experiment were to simply remove the internal components of a personal computer and place them strategically inside a compact refrigerator. My data is represented in bar-graph form and displays the temperatures that the processor heat-sink reached over 20 minutes. My hypothesis proved incorrect: the computer failed after 23 minutes of use because of a small amount of condensation that built up around the processor. The processing unit was permanently damaged and ceased to operate.
Although the processor was destroyed, the computer was able to operate at a very stable temperature before the failure. The flaw in the experiment was a result of the condensation that formed around the processor. This problem could easily be corrected by the use of an air dehumidifier, which could filter the moist air inside the refrigerator and cycle it more efficiently around the computer components.
2004 - CS052
DIGITAL DATA PIRACY PROTECTION
Kenny Sharma
Oak Grove High School, San Jose, CA, Santa Clara County
Overview - Digital Data Piracy Protection is a high-level encryption system designed to prevent the piracy of music downloads by effectively enforcing single-license distribution. It consists of four major components: Xeno (the encryption core system), FreeLancer (the secure transfer system), Omnicron (the server-side delivery system), and Infinitrix (the user interface).
How the Components Work
Xeno - the encryption core - is a unique encryption system that creates files unique to each computer or other medium, ensuring that the media will only work in a single environment.
FreeLancer - the transfer system - securely transfers the information that allows Xeno to work remotely with the server, letting Xeno and Omnicron work together seamlessly in a secure environment.
Omnicron - the server-side delivery system - encrypts the media for each specific destination using the information derived by FreeLancer, and works in conjunction with the user interface Infinitrix to access user data and deliver the media securely.
Infinitrix - the user interface - is what the user actually sees; it allows the user to browse and purchase music and then delivers it in its properly encrypted form.
Summary - the user initiates the download using Infinitrix, FreeLancer collects data and transmits it, Omnicron encrypts and returns the media via FreeLancer, and Infinitrix delivers it to the user.
Results - In over 750 unique attempts to date, the encryption core has never been compromised by any amateur or commercially available program. The system works well within parameters.
2004 - CS017
EXPLORATION AND TUNING TOOLS FOR IMAGE-BASED RENDERING ALGORITHM
Peter P. Sikachev
Lyceum of Information Technologies # 1533, Moscow, Russia
The aim of this project was to create a software toolkit for exploring and tuning the characteristics of a novel image-based rendering algorithm, Light Field Mapping (LFM). This algorithm was developed by R. Grzeszczuk (Intel Corp.) in 2002; it uses the concept of surface light fields. A light field is a 4-dimensional function in which two parameters specify the location of a point on the surface of the object and the other two specify the azimuth and elevation of the virtual camera. The toolkit includes an algorithm animation manager, benchmarking tools for tuning the performance of the LFM Viewer, and a 3D object loader for synthetic objects.
LFM was developed in order to retain maximum image quality. In effect, it overlays complex multi-variable lighting and shading algorithms upon textured surfaces. This process can be hardware accelerated to achieve real-time performance through 3D hardware.
The algorithm animation manager helps to visualize the key phases of LFM and the associated data structures. This component can be used by the graphics hardware designer during the initial exploration of the details and interconnections between LFM features and concepts.
The benchmarking tools help to measure the respective performance values (execution times) for several critical sections of LFM code in order to find the appropriate trade-offs in various LFM implementations.
The 3D object loader converts VRML 2.0 object descriptions into OBJ format for LFM. Texture data can be added freely to the object shapes so that synthetic objects from a 3D modeling environment can be used in LFM tuning experiments on a par with 3D scanner data.
2004 - CS059
COMPUTER HAPTICS: GIVING COMPUTERS THE SENSE OF TOUCH
Sonia Singhal
Carlmont High School, Belmont CA, USA
Purpose: The purpose of this experiment was to replicate the sense of touch using a computer.
Procedures: An apparatus was built that allowed a pointer to be moved in space. Two small motors connected to the arms supporting the pointer applied forces to the pointer. Two accelerometers measured the tilts of the arms, which a microcomputer used to calculate the pointer's position in space. The microcomputer also controlled the speed and direction of the motors via a motor controller.
Computer programs were written to simulate four virtual objects or "phantoms." The programs turned the motors on and off near the boundaries of the phantoms, thus resisting the motion of the pointer at those boundaries. This allowed the person moving the pointer to feel the virtual shapes in space.
Conclusion: The computer programs were able to generate virtual objects whose shapes could be sensed by a person feeling in space with a pointer. Although the boundaries were spongy, the shapes could be readily identified by the user.
The results of this experiment are useful for applications where the sense of touch is important. Examples include microsurgery, remote sensing, and aids for the blind.
Awards won at the 2004 ISEF
Paid Summer Internship - Agilent Technologies
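A control-loop sketch of the boundary-resistance idea described above, for a single spherical phantom; the hardware interfaces (read_tilts, set_motor), the 2D workspace, and all constants are hypothetical stand-ins for the apparatus:

    import math

    SPHERE_CENTER = (0.0, 0.0)   # a spherical "phantom" in the pointer's workspace
    SPHERE_RADIUS = 5.0
    MARGIN = 0.3                 # how close to the boundary before resisting

    def pointer_position(tilt_a, tilt_b, arm_len=10.0):
        """Forward kinematics: arm tilt angles (radians) -> pointer (x, y)."""
        return (arm_len * math.sin(tilt_a), arm_len * math.sin(tilt_b))

    def control_step(read_tilts, set_motor):
        """One iteration: read accelerometers, switch motors on near the boundary."""
        x, y = pointer_position(*read_tilts())
        d = math.hypot(x - SPHERE_CENTER[0], y - SPHERE_CENTER[1])
        near_boundary = abs(d - SPHERE_RADIUS) < MARGIN
        set_motor(on=near_boundary)   # resist motion only at the phantom's surface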
2004 - CS063
DEVELOPING THE INTELLECTUAL CAPACITY OF THE HEARING-IMPAIRED: ORGANIZING INFRASTRUCTURE AND CIVIL SOCIETY
Arman Tazhenov, Zhibek Tazhenova
Lyceum by Humanitarian College, Petropavlovsk, Kazakhstan
The goal of this research work is to create conditions which could help disabled children to realize their potential in the modern communicative and informational arena and participate in the business world, that is, to undergo the processes of socialization and individualization successfully, a necessary step for both internal and external development.
This project represents the authors' first experience doing research work and using all opportunities available to demonstrate the intellectual capacity of disabled children with diminished hearing, made possible by the infrastructure provided by the scientific and business Web community.
The research materials are of great interest to government bodies and managers working in the socio-cultural sphere who want to create programs for children with low hearing levels. The practical significance of this research is that its results could be used to discover gifted children not only among children who have problems with hearing, but also among other groups of disabled young people.
The results of the project can be used for all people, not only those with hearing problems but also children with motor-skill impairments. This is the perspective of the study.
2004 - CS301
PROJECT INTERMED: A NEW SOURCE
Brian Andrew Tessaro, Kobe Ticon Storay, Ty Anthony Wood
Springdale High School, Springdale, Arkansas
This project is designed to be a medical databasing system to accelerate hospital admittance processing. Extensions from last year include a cross-referencing system and a form designer that let hospitals using the program dynamically individualize the system to fit specific needs.
The first step in the development was to completely rewrite year one's program to create a faster, more reliable system. Because it was intended to run on desktops, laptops, and Pocket PCs, the program had to be written in a cross-platform language. The solution was to use C#. Concepts from last year, such as using hospital forms and incorporating their information into a database so that the information could be searched, were used as the basis for this year's project. A communication system was written to allow the program to communicate efficiently between client and server programs. Once that was completed, an encryption algorithm was written to resolve security issues. A form designer was created so that non-programmers can build hospital forms that are instantly accessible for database procedures and for hard-copy documentation. A cross-referencing system was designed to enable medical professionals to research similar cases based upon previous diagnoses and prognoses.
The resulting databasing system meets all the original specifications. Extra features such as the cross-referencing system, the form designer, and a unique encryption algorithm enhance the efficiency of the project's goal. This system has the potential to function as a streamlined medical database.
2004 - CS031
JOST OPERATING SYSTEM FOR THE 32-BIT 80386+ INTEL ARCHITECTURE MACHINE
John Seth Thielemann
Cumberland Valley High School, Mechanicsburg, PA, USA
This project was created to control the environment of an 80386+ 32-bit Intel Architecture machine. The project operates in the protected-mode setting of the 80386+ IA machine and supports multitasking. The 2003-2004 project provides an updated Jost File System, direct modules for controlling certain peripheral devices, and support for user functions. It is written in the 80386 Intel Architecture assembly language instruction set.
To keep the code easily understandable, the JostOS(32b) Kernel is split into submodules, and those submodules are based on procedures that provide a hierarchical structure, from low-level port interfacing or searching up to higher-level procedures that call groups of lower-level ones.
The Jost Operating System for the 32-bit IA is split into two main parts, the bootstrap and the kernel. The bootstrap loads the Kernel into memory, sets up system tables, initializes a few registers, enters protected mode, and transfers processor execution to the Kernel. The Kernel is broken into submodules that are eventually grouped together to form a single main source file. These submodules provide the functionality of the Jost Operating System, including the keyboard, basic input/output, IDE hard disk access, the file system, interrupt service routines, multitasking procedures, the system timer, and the Jost Operating System's Shell. The Kernel proceeds to initialize system tables, peripheral devices, and the JostOS(32b) Shell application. After the initialization phase, the Kernel issues a task switch to the Shell application to handle the requests of a user.
The Operating System is capable of performing all of the predefined goals initiated at the start of the project.
2004 - CS009
DEMONSTRATING DISTRIBUTED COMPUTING
Zachary Joseph Tong
William Mason High School, Mason, Ohio, USA
Distributed computing is the use of multiple host computers across the Internet to solve a problem in parallel. It is a way to solve computationally intensive
problems quickly. This project’s goal was to demonstrate the speed and power of distributed computing. It did this by creating a client (programmed in Visual
Basic) that searched for emirps (a type of prime number). A server was programmed in PHP and was attached to a mySQL database. The client was
distributed to participants, who ran the client for about three weeks. At the end of the three-week period, 1,900,000 numbers had been searched and 505
emirps had been found. This was done 380 times faster than the control computer could have done it alone. In conclusion, the project was a success, as it
demonstrated the speed and power of distributed computing.
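An emirp is a prime whose digit reversal is a different prime. A minimal test like the one the client performs might look as follows (this Python sketch is my own; the project's client was written in Visual Basic):

    def is_prime(n: int) -> bool:
        """Trial division; adequate for the small numbers a volunteer client checks."""
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    def is_emirp(n: int) -> bool:
        """Prime whose reversal is a different prime (palindromic primes excluded)."""
        r = int(str(n)[::-1])
        return r != n and is_prime(n) and is_prime(r)

    # Example: 13 reverses to 31, both prime, so 13 is an emirp;
    # 101 is prime but palindromic, so it is not.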
2004 - CS034
GAMMA RAYS FROM THE CRAB NEBULA USING OPTIMIZED CUTS
Emily Kate Toone
Bountiful High School, Bountiful, UT, United States
Radiation from the Crab Nebula is detected when the direction of the "pulsar lighthouse effect" allows the electromagnetic radiation to shine on the Earth, which happens 30 times a second, making it one of the strongest and most consistent benchmarks to use. Gamma rays from active areas of the universe are detected on Earth when gamma radiation transfers its energy to molecules in the atmosphere, creating a rain-shower effect. Gamma ray data collected has to be selectively altered to establish the validity of the source.
The present industry standard for the data cuts is called Super Cuts 2000. Using Super Cuts 2000 as the standard, the numbers for the 7 different variables were continually altered on the UNIX system until one specific optimum was found. The process was completed using a different variable order until a specific pattern of cause and effect within the altered variables was defined. The numbers were recorded, then thrown out, and the process was repeated. Several optimized cuts were established.
2004 - CS308
HUMAN VENICE
Luis Eduardo Torres Duarte, Jose Ramon Diaz Navarrete, Rodrigo Velasco Velasco
C.B.T.i.s. No. 168, Aguascalientes, Aguascalientes, Mexico
The project "Human Venice" (in Spanish, "Venecia Humana") is a computer-assisted software that resulted from a proposal based upon human body studies,
specifically the nervous system. The name is derived from a comparison with the City of Venice, Italy, and its network of channels, similar to the nervous
system structure.<br><br>The goal is to develop a tool that supports students in the area of study solve the problem related to the access and understanding of
this kind of information while facilitating the learning process. The application of the tool includes educational programs and schools where the study of the
nervous system is applied.<br><br>"Human Venice" includes two blocks: the first one is dedicated to update the administrators of the program and personnel
already involved with the area of study incorporating, replacing and updating the anatomic sub-themes according to the constant scientific developments. The
second block is dedicated to support the learning process by itself and includes several applications such as search and selection of themes (presented in text
and multimedia), didactic autoevaluation processess and animated interactive exercises, among others.
Awards won at the 2004 ISEF
Second Award of $1,500 - Team Projects - Presented by Science News
2004 - CS060
F.I.N.D. DEVELOPMENT OF A FUNCTIONAL INEXPENSIVE NAVIGATIONAL AID AND DATABASE
Benjamin Thomas Unsworth
Kingswood Academy, Sulphur, LA, USA
Shipping and dredging industries need accurate and up-to-date 3D maps of waterways to help create and maintain safe waterways. In 2004, Louisiana's dredging funds were cut by 40%, accelerating the need for even more accurate maps. The currently available navigational aids are inadequate, expensive, or proprietary.
This project developed an inexpensive visual navigational computer aid (VNA) showing the current location of a surveying vessel on an aerial photo. A database collects latitude, longitude, and depth along predefined survey lines. Civil engineers use the information from the database to create 3D maps of waterways for the shipping and dredging industries.
Specifications were developed during meetings with civil engineers. The VNA helps the surveyors accurately follow predefined lines. It shows the vessel as a fixed icon with the aerial photo scrolling and rotating around it, giving the illusion that the vessel is moving. The VNA shows the boat's direction, as well as the direction to the start and end of the target lines. It retrieves the latitude, longitude, and depth from the GPS and Fathometer through RS232 ports and saves the information.
The VNA charts the target lines to be surveyed; it displays and exports the data collected.
A problem developed in rotating the aerial photo. The rotating function resizes the aerial photo, causing the pixels to alter their position after each rotation, which results in the misplacement of the boat on the screen. The pixel changes were corrected using the Cartesian plane formula R^2 = (A-Y)^2 + (B-X)^2 in conjunction with arcsine.
Awards won at the 2004 ISEF
Honorable Mention Award of $100 - U.S. Coast Guard
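A hedged sketch of that correction: treating (B, A) as the rotation center, a pixel's radius follows from the circle equation above and its bearing from an inverse trigonometric function, so its position after a rotation can be recomputed exactly. The function below uses atan2 rather than arcsine for robustness in all quadrants; the details are my assumptions, not the project's code:

    import math

    def corrected_position(x, y, cx, cy, rotation_deg):
        """Recompute a pixel's position after the photo rotates about (cx, cy).
        The radius comes from R^2 = (cy - y)^2 + (cx - x)^2."""
        r = math.hypot(x - cx, y - cy)
        bearing = math.atan2(y - cy, x - cx)            # current angle
        bearing += math.radians(rotation_deg)           # angle after rotation
        return (cx + r * math.cos(bearing), cy + r * math.sin(bearing))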
2004 - CS066
ISAN: A MULTI-AGENT FRAMEWORK FOR SECURE, ANONYMOUS AND EFFICIENT INTERACTIONS IN DISTRIBUTED VIRTUAL ENVIRONMENTS
Davide Venturelli
Liceo Scientifico "A.Tassoni", Modena, Italy
ISAN (Internet Secure Agent Network) is a study of a realisable peer-to-peer network architecture for the deployment of time-efficient Internet services requiring communication protection and anonymity of both the user and the producer of the exchanged information.
It achieves its goals through the employment of mobile software agents that are unleashed by the users into the network in order to create stable Tunnels between nodes. If requested, Tunnels are automatically arranged to create subspaces: virtual multi-user connection structures that can be anonymously accessed and that can support services administered by the cooperation of many peers. Moreover, ISAN users can delegate tasks to agents, allowing them to perform searches, discover services, and react remotely in a pre-programmed way once triggered by certain events, even if their creator is temporarily offline.
The framework is designed to employ the most advanced anonymity-oriented cryptographic protocols, and it should efficiently resist all known eavesdropping and Denial of Service attacks through the application of original self-repair and flood-control procedures.
The author has also written a simulation in Python, which is publicly available on the web, in order to test, in normal and extreme situations, the ideal performance of the system by checking its bandwidth usage, fault tolerance, and probability of success of common agent tasks. Other research and simulation results on similar networks have been examined and compared.
The results are encouraging: ISAN seems able to efficiently handle any service which doesn't require the transfer of large files.
Awards won at the 2004 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2004 - CS003
AUTONOMOUS OBSTACLE EVASION ROBOT
Juan Carlos Villa
Southwestern Educational Society, Mayaguez, Puerto Rico
An even balance of sensors on a robot and an intelligent algorithm can make the robot highly capable when running through an obstacle course. This experiment carried out the construction of several algorithms to control movement and detection of obstacles. The algorithms were embedded in a robot and programmed to interoperate. The experiment was carried out in four different types of obstacle courses. The objective of the robot was to pass the obstacle course with the minimum number of collisions and minimum time elapsed. The sensor was mounted on a rotating base that gave the robot the ability to take measurements at different angles. The robot's program works by running a full half-circle scan and passing the readings through the algorithms, which take the appropriate action for the robot's movement.
The results of this experiment led to various conclusions. The sonar and infrared sensor readings showed no consistent relationship to the distance from the obstacle. The observations of the experiment led to the conclusion that the navigational algorithm works better in obstacle courses with only ninety-degree angles rather than varying angles; in the course with non-ninety-degree angles, the ultrasound sensor performed better. It was also observed that the sonar sensor yielded more inaccurate results and outliers than the infrared sensor because of its spread beam pattern. Future experimentation will include a gyro to better detect rotation.
Awards won at the 2004 ISEF
90% paid tuition scholarship award - Instituto Tecnologico y de Estudios Superiores de Monterrey, Campus Guadalajara
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2004 - CS011
CAUTION: WATCH FOR FALLING OBJECTS! MODELING METEORS IN EARTH'S ATMOSPHERE
Dietrich J. Wambach
Guernsey-Sunrise High School, Guernsey, Wyoming
The purpose of this project is to create an application that will model a meteor entering the earth's atmosphere. This program should answer questions such as: Will the object land on the earth or burn up in the atmosphere? What will be the meteor's maximum speed? What is the force of the impact on the earth's surface? What will be the size of the meteor, if and when it hits the ground?
I used C++ to make a numerical modeling program, which means that during each increment of time (delta t) the quantities describing the object, such as velocity, acceleration, air resistance, and altitude, are recalculated. This process is run over and over again until the object reaches the ground or burns up: the smaller the delta t increment, the more accurate the model.
The program uses equations for finding drag, air resistance, displacement, velocity, terminal velocity, change in temperature, and acceleration. The resulting data from the program reveals how a meteor is affected by its size, substance, air temperature, etc. as it is in free fall through the atmosphere. It also gives the speed of the meteor, the meteor's terminal velocity, and the drag behavior throughout the earth's atmosphere.
Every goal I set for this program was met; the program was a success at determining an accurate model of a meteor entering the earth's atmosphere.
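A minimal fixed-time-step integration in the spirit described above: each step recomputes drag, acceleration, velocity, and altitude. The constants, the exponential atmosphere, and the drag model are illustrative textbook assumptions; ablation (burn-up) and temperature are not modeled here, and the project's exact equations are not given:

    from math import exp, pi

    G = 9.81          # gravity, m/s^2 (near-surface approximation)
    RHO0 = 1.225      # sea-level air density, kg/m^3
    SCALE_H = 8500.0  # atmospheric scale height, m

    def air_density(alt_m: float) -> float:
        return RHO0 * exp(-alt_m / SCALE_H)

    def impact_speed(mass, radius, alt, v=0.0, dt=0.01, cd=1.0):
        """Integrate the fall until the object reaches the ground; a smaller dt
        gives a more accurate model, exactly as the abstract notes."""
        area = pi * radius ** 2
        while alt > 0:
            drag = 0.5 * air_density(alt) * cd * area * v * v   # opposes motion
            a = G - drag / mass                                 # downward positive
            v += a * dt
            alt -= v * dt
        return v

At terminal velocity the drag term equals the weight, so the acceleration, and hence the change in v, goes to zero.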
2004 - CS006
NEURAL NETWORKS AND SPECIATION GENETIC ALGORITHMS
Craig Andrew Wilson
Independent, Austin, TX, USA
This project studies the effect of introducing speciation to genetic algorithms (GAs) that train feed-forward, non-recurrent neural networks (ANNs). Speciation divides the population of the GA into species composed of ANNs with similar weight sets. Weight sets represent a way of solving a problem, so different weight sets represent different ways. Speciation thus divides ANNs into groups that represent similar ways of solving a problem, and it protects each way by allowing it to be optimized without direct competition: each species runs its own GA on only its members.
To test my speciation GA I compared it against other GAs, simulated annealing (SA), and backpropagation (BPN). The tests I used included the XOR test, the Mackey-Glass delayed difference equation, pole balancing, and two time series. In the Mackey-Glass test, based on sum squared error (SSE), BPN and the speciation GA performed the best. In pole balancing the speciation GA had an 80% success rate versus 0% for the normal GA. The speciation GA proved to be the best on the two time series tests. Results were inconclusive on the XOR test.
This project has demonstrated the viability of speciation GAs for training neural networks. It has also shown speciation GAs to be superior or equal to more classic training algorithms in several situations. Further research on more types of problems is necessary to determine exactly how good speciation is.
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
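A sketch of speciation by weight-set similarity as the abstract describes it: networks whose flattened weight vectors lie close together are grouped into one species. The Euclidean metric, the first-member representative, and the threshold are my assumptions (this mirrors NEAT-style speciation), not necessarily the project's scheme:

    import math

    def weight_distance(w1, w2):
        """Euclidean distance between two networks' flattened weight vectors."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(w1, w2)))

    def speciate(population, threshold):
        """Each weight vector joins the first species whose representative
        (its first member) lies within `threshold`; otherwise it founds a new
        species. Each species then runs its own GA on only its members."""
        species = []
        for w in population:
            for members in species:
                if weight_distance(w, members[0]) < threshold:
                    members.append(w)
                    break
            else:
                species.append([w])
        return species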
2004 - CS022
EVALUATING THE ABILITY OF A GENETIC ALGORITHM TO FIND AN OPTIMAL INVESTMENT PORTFOLIO
Jordan Strong Wilson
Math & Science High School at Clover Hill, Midlothian Virginia, USA
This research project evaluated the ability of a genetic algorithm to find an optimal investment portfolio. Genetic Algorithms (GAs) and Evolutionary Algorithms
(EAs) are computer programs based on the ideas of Darwin’s theory of evolution that have the potential to revolutionize how optimization problems are solved.
The purpose of this experiment was to see if one type of genetic algorithm, known as a differential evolution (DE) algorithm, could be applied to an investment
problem and whether it would outperform a “buy the market” strategy. It was hypothesized that the DE algorithm would outperform the “buy the market”
file:///C/Users/SheilaKing/Documents/Abstracts/Society%20for%20Science%20&%20the%20Public%20-%20Computer%20Science.html[8/23/2015 6:04:59 PM]
Society for Science & the Public - Page
approach. The DE algorithm was given a matrix of data consisting of historical prices of the thirty stocks that comprise the Dow Jones Industrial Average. The
DE algorithm assembled a random set of portfolios as its initial population. This population of portfolios underwent hundreds of generations or trials, using
natural selection to eliminate and retain portfolios based on the specific criterion of total portfolio returns over a training period. This was designated as Period A
and spanned 1,313 trading days from December 9, 1993 through February 22, 1999. The hypothesis was tested in Period B, the daily closing prices from
February 23 1999 through November 29, 2003. Period B was the hypothetical “future” to test the competing portfolios. “Buying the market” produced Period B
return of –9.4% while the optimal DE portfolio provided a 10.4% gain. This result supports the hypothesis that a genetic algorithm could be successfully applied
to an investment problem.
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
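A minimal DE/rand/1/bin generation over portfolio weight vectors, in the spirit of the experiment; the control parameters, the long-only normalization, and the fitness interface are illustrative assumptions, since the abstract does not give the algorithm's settings:

    import random

    def de_step(pop, fitness, f=0.8, cr=0.9):
        """One differential-evolution generation: mutate, crossover, select.
        Requires at least four portfolios in `pop`."""
        new_pop = []
        for i, x in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = random.randrange(len(x))          # force one mutated gene
            trial = [a[k] + f * (b[k] - c[k])
                     if (random.random() < cr or k == j_rand) else x[k]
                     for k in range(len(x))]
            trial = [max(w, 0.0) for w in trial]       # no short positions
            total = sum(trial) or 1.0
            trial = [w / total for w in trial]         # weights sum to 1
            new_pop.append(trial if fitness(trial) > fitness(x) else x)
        return new_pop

    # `fitness` would be total portfolio return over the Period A training window.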
2004 - CS020
FAULT-TOLERANT BEHAVIOR-BASED ROBOTS
Laura Anne Wong
Villa Victoria Academy, Ewing, NJ, USA
Use of robots in the real world requires robust systems that can handle diverse and changing environments. They must also contend with problems that occur within the robot itself.
A behavior-based robot design can be made more resistant to failure through the use of meta-behaviors combined with a hardware design that includes sensors and resources with overlapping capabilities. Meta-behaviors handle fault conditions by changing the set of active behaviors. It is easier to create and debug each set of behaviors than to create one set of behaviors that handles fault conditions implicitly; the latter tend to be more complex behaviors that are harder to debug.
The system developed using this approach includes robots that communicate using an infrared beacon and radio. They use video and touch sensors for obstacle avoidance. These devices provide additional overlapping capabilities. For example, the beacon and video support can be used to determine the relative position of another robot.
This design approach extends to the robot swarm. Robots cooperate using meta-behaviors to communicate error conditions and support information. In this case, a robot's video sensor can provide a nearby robot with relative position information by detecting the nearby robot.
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
One all-expense-paid trip to the London International Youth Science Forum, $3,000 in savings bonds, $300 from the Association of the United States Army, a gold medallion and Certificate of Achievement. - U.S. Army
UTC Stock - United Technologies Corporation
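A sketch of the meta-behavior idea: a meta-level monitor selects which behavior set is active based on detected fault conditions, rather than burying fault handling inside each behavior. The behavior names and fault flags below are illustrative, not the project's actual tables:

    # Behavior sets for different fault conditions (contents are hypothetical).
    NORMAL = {"avoid_obstacles": "video_and_touch", "localize": "beacon_and_video"}
    VIDEO_FAULT = {"avoid_obstacles": "touch_only", "localize": "beacon_only"}
    BEACON_FAULT = {"avoid_obstacles": "video_and_touch", "localize": "video_only"}

    def select_behaviors(faults: set) -> dict:
        """Meta-behavior: swap the active behavior set when a fault appears.
        Overlapping sensors make a degraded set available for each fault."""
        if "video" in faults:
            return VIDEO_FAULT
        if "beacon" in faults:
            return BEACON_FAULT
        return NORMAL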
2004 - CS061
REAL-TIME CALCULATION OF THE ROCKET ROLL ATTITUDE USING THE ON-BOARD VIDEO CAMERA
Alexander Thomas Woods
Fairview High School, Boulder, Colorado, USA
The Laboratory for Atmospheric and Space Physics (LASP) annually flies a suborbital rocket to recalibrate a satellite instrument called the Solar Extreme-ultraviolet Experiment (SEE). The instruments onboard the rocket are spectrographs that measure the solar extreme ultraviolet spectrum and the Earth airglow spectrum. The NASA Attitude Control System (ACS) points the rocket toward the sun with a precision of one arc-second, but the roll control is poor. The airglow camera is onboard to measure the rocket roll attitude more accurately. Previous flights have used manual roll measurements of the Earth's limb on a video display.
The goal of this project is to automate the measurements of the roll attitude and make the roll measurements more accurate than the previously used manual method. The project involved calibration of the camera's angular resolution and alignment, and development of software to automate the roll measurements.
This project demonstrated the feasibility of real-time calculation of the rocket roll attitude using software to analyze images from the onboard video camera. The software performed well on the launch of NASA rocket 36.205 on August 12, 2003 at the White Sands Missile Range in New Mexico. The software measured an initial roll offset of 10 degrees, and three commands were sent to correct the rocket roll attitude.
The software written during this project will be used in future flights of LASP's suborbital sounding rockets to provide better control over the rocket roll attitude. This technique could also be used for other rocket and satellite attitude control systems.
Awards won at the 2004 ISEF
Honorable Mention Award of $250 - Eastman Kodak Company
2004 - CS303
ON ENUMERATIONS AND EXTREME VALUES OF ANTI-CHAINS, CHAINS, AND INDEPENDENT SETS OF BINARY TREES
Tsung-Yin Wu, Cheng-Hsiao Tsou
Taipei Municipal Chien-Kuo Senior High School, Taipei, Taiwan
The binary tree is one of the most important data structures in computer science; it can be applied to data sorting and searching. In this project we study the statistics of Chains, Anti-Chains, and Independent Sets in a binary tree. A Chain can be used to sort or search a set of correlated data; by contrast, an Anti-Chain can be used to sort or search a set of independent data, while an Independent Set can be used to classify data which interfere with each other.
In this project we propose algorithms to enumerate the number of Anti-Chains, Chains, and Independent Sets of a binary tree. We also calculate the extreme values of these three statistics and discuss the extreme graphs at which these extreme values are attained. We prove the dual property of extreme graphs between Anti-Chains and Chains of a binary tree.
Finally we compare the relations among these three statistics of a binary tree.
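For a flavor of this kind of enumeration, below is the standard two-state recursion for counting independent sets in a binary tree (selections containing the root are counted separately from those that do not); this is a textbook illustration, not the authors' algorithm:

    class Node:
        def __init__(self, left=None, right=None):
            self.left, self.right = left, right

    def count_independent_sets(node):
        """Return (with_node, without_node); their sum counts every independent
        set in the subtree, including the empty one."""
        if node is None:
            return (0, 1)                       # only the empty selection
        lw, lo = count_independent_sets(node.left)
        rw, ro = count_independent_sets(node.right)
        with_node = lo * ro                     # children must be excluded
        without_node = (lw + lo) * (rw + ro)    # each child subtree is free
        return (with_node, without_node)

    # Example: for a root with two leaf children, the counts are (1, 4),
    # i.e. 5 independent sets in total (counting the empty set).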
2004 - CS306
RESOURCE ALLOCATION BY INTEGRATION OF DISPATCHING HEURISTICS WITH GENETIC ALGORITHMS
Yen Tung Yeh, Christina Kwong, Dean Thongkham
Westview High School, Avondale, Arizona, United States of America
Semiconductor wafer fabrication is a complicated process involving hundreds of intricate steps. Due to this complexity, it is difficult for manufacturers to meet customer due dates; in fact, scheduling wafer fabrication facilities to minimize TWT is known to be an NP-hard problem. In this study we consider parallel machine scheduling problems with job ready times and sequence-dependent setups to minimize total weighted tardiness (TWT, a measure that takes into account a job's lateness and its importance). We aim to improve upon one of the best existing heuristics for the problem (the ATCSR heuristic) by integrating ATCSR with Genetic Algorithms. Our hypothesis is that the integrated Genetic Algorithms-ATCSR scheduling approach will outperform ATCSR alone. First, research was conducted on the different types of dispatching heuristics. Using a simulation written in C++, we tested the parallel machine environment to compare the ATCSR approach with the new hybrid Genetic Algorithms (GA) approach. We conducted simulations of 16 combinations of 4 variables (setup severity factor, due date tightness, due date range, ready time tightness), each with a high and a low value. Each of the 16 problem types was simulated with 14 replications. Upon completion, a graph of the total weighted tardiness of the pure ATCSR and the hybrid GA, and another of the ratios of GA to ATCSR, were constructed. Tables were also compiled to better represent the relationship between the high and low values of each variable. This analysis demonstrated that the hybrid GA approach outperformed pure ATCSR by 9% in total weighted tardiness.
Awards won at the 2004 ISEF
Award of $500 - American Association for Artificial Intelligence
Third Award of $1,000 - Team Projects - Presented by Science News
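For reference, the objective the study minimizes, total weighted tardiness, has a simple worked definition (the tuple layout below is my own illustration):

    def total_weighted_tardiness(jobs):
        """jobs: (completion_time, due_date, weight) per job.
        Tardiness is max(0, C - d); weights encode a job's importance."""
        return sum(w * max(0, c - d) for (c, d, w) in jobs)

    # Example: a weight-2 job finishing 3 units late contributes 6;
    # an early job contributes 0, so the total below is 6.
    print(total_weighted_tardiness([(10, 7, 2), (5, 8, 1)]))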
2004 - CS302
TOWER OF HANOI A LA KNUTH
Hsiang-Fu Yu, An-Sheng Li
Taipei Municipal Chien-Kuo Senior High School, Taipei, Taiwan
Knuth once described the Tower of Hanoi as follows: "You have 3 pegs, and you have disks of different sizes. You are supposed to transfer the disks from one peg to another, and the disks have to be sorted on each peg so that the biggest is always on the bottom. You can move only one disk at a time." This project investigates the query raised by John McCarthy: what if any order of disks on a peg is acceptable as long as the disk on the bottom is the largest?
Our study of the modified game (Tower of Hanoi a la Knuth) focuses on the following directions:
(1) Structure analysis: We analyze and enumerate the sequences recording the disc moves in order to derive both recursive and non-recursive algorithms.
(2) Partition of N: We discover that the sequence of moves forms a partition of the positive integers, and it has a wonderful congruence property.
(3) The ordering of proper fractions associated with the Fibonacci numbers: The rows/columns of the partition table give exactly the ordering that sorts the proper fractions associated with the Fibonacci numbers of fixed denominator/numerator.
(4) Restoration of an arbitrary initial state: We devise an efficient algorithm for restoring any initial state of the discs.
This project offers a simple, neat, and new model in algorithm theory, number theory, and combinatorics.
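For contrast with the variant studied here, the classic 3-peg recursion for the unmodified game in Knuth's description is shown below; the relaxed rule, where only the bottom disk must be the largest, admits different and shorter move sequences, which the abstract enumerates but does not list:

    def hanoi(n, src, dst, aux, moves):
        """Classic recursion: move n disks from src to dst via aux."""
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst, moves)
        moves.append((n, src, dst))        # move disk n from src to dst
        hanoi(n - 1, aux, dst, src, moves)

    moves = []
    hanoi(3, "A", "C", "B", moves)
    print(len(moves))  # 7, i.e. 2^3 - 1 moves for three disks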
2004 - CS039
MULTI-LANGUAGE, CROSS-PLATFORM INTEGRATED DEVELOPMENT ENVIRONMENT
Yu Xuan Zhai
Jinling High School, Nanjing, Jiangsu Province, PRC
The primary purpose of the project is to build a highly integrated development environment (IDE) that supports both the .NET and Java 2 platforms, supports multiple programming languages, and can develop programs running on multiple operating systems or wireless devices, while providing good development efficiency. Developers are thus free to choose the languages they like, and a single IDE is enough.
The C# language was used to develop the project. I developed a technology called the extensible language module. Different language modules are built on the extensible language module technology and are connected to the IDE by XML (eXtensible Markup Language) to support multiple programming languages and platforms.
The IDE can develop programs running on wireless devices (including PDAs and mobile phones) which support the KVM (K Virtual Machine). It supports many languages, including C#, Visual Basic.NET, JScript.NET, Java, and XML. It can also develop programs running on different operating systems such as Windows, Solaris, UNIX, Linux, and Mac OS X.
The project provides an easy-to-use, highly integrated development environment by being fully object-oriented, enabling users to write code that is platform-independent and to create applications that run on wireless devices. Based on the extensible language module technology, the IDE can support more new programming languages and platforms in the future.
2004 - CS038
REAL-TIME REMESHING WITH OPTIMALLY ADAPTING DOMAIN: A NEW SCHEME FOR VIEW-DEPENDENT CONTINUOUS LEVELS-OF-DETAIL MESH
RENDERING
Yuanchen Zhu
Shanghai Foreign Language School, Shanghai 200083, China
View-dependent continuous levels-of-detail mesh rendering is much studied in Computer Graphics. However, existing methods that support arbitrary meshes suffer from either suboptimal rendering efficiency on modern graphics hardware, due to irregular output meshes, or suboptimal fidelity, due to inflexible sample distribution in semi-regular remeshing. This project presents a novel solution to these problems.
I propose to use an adaptively refined progressive mesh as the base domain of further regular refinements. Special data structures are designed to allow efficient mapping from the dynamic domain to the original model during regular refinements. A new algorithm for frame-coherent view-dependent optimization of generic multiresolution models based on directed acyclic graphs is also devised. Utilizing the inverse linearity of screen-space error w.r.t. distance, the algorithm schedules deferred evaluation and sorting of nodes to achieve constrained optimality in delta-output-sensitive runtime.
The dynamic domain provides sample distribution optimized for view-dependency and triangle budgets. The regular refinements ensure local regularity and allow the use of cache-aware vertex arrangements, reducing the hardware vertex cache miss rate by a factor of 2 to 4. With the new algorithm, less than 5% of the domain, and subsequently of the output mesh, is evaluated or modified in typical frames.
The major contributions of this project are twofold: first, a multiresolution surface representation providing both optimized sample distribution and hardware friendliness is introduced. Secondly, an efficient algorithm for view-dependent constrained optimization of generic multiresolution models is presented. Combined, the two give rise to a powerful new view-dependent continuous levels-of-detail scheme that surpasses existing methods in both approximation fidelity and rendering efficiency.
Awards won at the 2004 ISEF
First Award of $1,000 - Association for Computing Machinery
Second Award of $500 - IEEE Computer Society
A scholarship of $50,000. - Intel Foundation Young Scientist Award
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Computer Science - Presented by Intel Foundation
The SIYSS is a multi-disciplinary seminar highlighting some of the most remarkable achievements by young scientists from around the world. The students
have the opportunity to visit scientific institutes, attend the Nobel lectures and press conferences, learn more about Sweden and experience the extravagance
of the Nobel festivities. - Seaborg SIYSS Award
Award of Merit of $250 - Society of Exploration Geophysicists
2005 - CS023
VIRTUAL STUDIO
Gyeong-Yun Bae
Changdong High School, Seoul, Republic of Korea
With the rapid improvement of computer hardware and the increased support of graphics libraries, migrating the realities of life into virtual realities becomes easier. Much software has been developed for modeling geographical features and characters. In editor software for geographical features especially, the technique for managing vertices has improved dramatically, so more realistic geographical features can be drawn. But the limitation of such software is that it cannot accurately describe complex objects. In this paper, we develop an editor for geographical features that accurately draws complex objects such as cliffs and canyons. Furthermore, in our tool the virtual model changes according to the flow of time. Using our tool, a more realistic virtual model can be drawn.
2005 - CS047
THE ANITRA COMPUTER: A COMPLETE MINIMALIST COMPUTER SYSTEM DESIGNED, BUILT AND PROGRAMMED AT A LOW LEVEL OF
ABSTRACTION
Eirik Bakke
N/A
A classic exercise in computer science courses is programming hypothetical minimalist computers such as the Move Machine or the One Instruction Set Computer (OISC). The goal of this project was to design, build and program a computer/central processing unit (CPU) from scratch using standard 74TTL-series-compatible digital logic circuits, while investigating the minimum component usage needed to maintain a certain minimum functionality.
I first defined design requirements and laid out possible CPU architectures. I created a minimized datapath unit, then specified an instruction execution sequence and designed an associated control unit. The resulting computer, called Anitra, supports 32 kilobytes of memory and two universal instructions: "move and complement" and "add, complement and branch if carry". For testing, I designed a development board and wrote a cross-assembler and an interactive debugger/emulator, before simulating the circuits in CAD software and finally building a working hardware prototype.
Given my requirements, I have shown it possible to construct a computer that comes close to a provable lower component limit, and my investigations suggest that a simpler datapath portion of the CPU is unlikely to exist. The study of minimalist computers casts light upon issues in architecture design that may otherwise go unnoticed. One corollary is a better explanation of why the concept of a program counter is present even in the simplest of practical computers; another is the observation that even the OISC can be significantly simplified in hardware terms and still provide another instruction for free.
Awards won at the 2005 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
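The OISC the abstract mentions is often demonstrated with the subleq ("subtract and branch if less than or equal to zero") instruction; a minimal emulator for it follows, purely to illustrate the single-instruction idea. Anitra's own two instructions differ from this:

    def run_subleq(mem, pc=0):
        """mem is a flat list of ints read as (a, b, c) triples:
        mem[b] -= mem[a]; if the result is <= 0, jump to c, else fall through.
        A jump to a negative address halts the machine."""
        while 0 <= pc <= len(mem) - 3:
            a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
            mem[b] -= mem[a]
            pc = c if mem[b] <= 0 else pc + 3
        return mem

    # Example: one instruction that clears mem[3] (7 - 7 = 0) and halts (c = -1).
    print(run_subleq([3, 3, -1, 7]))  # -> [3, 3, -1, 0]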
2005 - CS311
COLLABORATING ROBOTS IN VIRTUAL ENVIRONMENT: MULTI-AGENT APPROACH
Vladimir A. Bazhin, Ivan S. Zakharov
Lyceum of Information Technologies # 1533, Moscow, Russia
The aim of the project was to create a software toolkit for developing and testing behavioral algorithms to be used by programmable robots in a virtual environment. Such robots can be used in various operations (search and rescue, scene exploration, fire-fighting operations, etc.).
The toolkit includes the following components:
• a server-side engine which handles the physical properties of the virtual world and performs visualization;
• a client-side interpreter which executes the robot's behavioral program and sends commands to the server.
At the moment, the client-side program can use source files written in the Lua (www.lua.org) and Qscript (the authors' own development) scripting languages. The SDK includes documentation on how to create new clients.
The toolkit also includes:
• a map editor for creating virtual environments;
• an object editor for adding new objects that can be placed into the virtual environment;
• a robot editor for assembling a robot model from a list of devices;
• a device editor for creating new devices affecting robot behavior;
• a 'console' client which provides control of the client-server system.
The open distributed system approach is used. The server receives commands describing robot actions from clients via the network. Users can create new clients supporting various programming languages; one can even control a robot from the keyboard. The visualization engine uses OpenGL. The network collaboration feature is built on TCP/IP using Berkeley sockets.
Awards won at the 2005 ISEF
Third Award of $1,000 - Team Projects - Presented by Ricoh
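In the open client-server style described, any language that can open a TCP socket can drive a robot. A minimal client sketch follows; the port and the one-command-per-line wire format are invented for illustration, since the toolkit's actual protocol is not specified:

    import socket

    def send_command(host: str, port: int, command: str) -> str:
        """Send one robot command to the server and return its reply."""
        with socket.create_connection((host, port)) as sock:
            sock.sendall((command + "\n").encode("utf-8"))
            return sock.recv(4096).decode("utf-8")

    # e.g. send_command("localhost", 9000, "MOVE forward 1.0")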
2005 - CS004
NOVEL INVESTIGATIONS FOR N-GRAM-BASED AUTOMATIC IDENTIFICATION OF WRITTEN LANGUAGE
Marek Blahus
Gymnazium Uherske Hradiste, Uherske Hradiste, Czech Republic
Automatic language identification is the important requisite often used in spell checking, machine translation and Web content filtering. In this project, N-grambased method is proposed for improved language identification. Furthermore, a novel computer program is designed to identify language of a given machinereadable text.<br><br> The program processes the given text. It searches the latter for all present groups of letters (size of one to three) in order to create the
set of possible outcomes with the related probabilities. Finally, based on the vector distance calculation, the closest language is determined by comparing this
set with built-in patterns for known languages.<br><br> A teaching module for recognizing new languages was also designed as a part of the program. When the probability set of one particular language is compared with the others, the resulting response indicates a similarity with the known genetic classification of languages.<br><br> I have tested the designed program in various Web applications such as machine translation and Web content filtering in the scope of the semantic Web. The results achieved show about 85% language identification success, even for relatively short texts.
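A minimal sketch of the method as described: build letter N-gram (sizes one to three) frequency profiles and pick the language whose profile lies at the smallest vector distance from the text's. The one-line training snippets are toy stand-ins for the program's built-in patterns.

    from collections import Counter
    from math import sqrt

    def profile(text, max_n=3):
        text = "".join(ch for ch in text.lower() if ch.isalpha())
        grams = Counter(text[i:i + n] for n in range(1, max_n + 1)
                        for i in range(len(text) - n + 1))
        total = sum(grams.values())
        return {g: c / total for g, c in grams.items()}   # relative frequencies

    def distance(p, q):
        keys = set(p) | set(q)
        return sqrt(sum((p.get(k, 0) - q.get(k, 0)) ** 2 for k in keys))

    known = {"english": profile("the quick brown fox jumps over the lazy dog"),
             "spanish": profile("el rapido zorro marron salta sobre el perro")}
    sample = profile("the dog jumps")
    print(min(known, key=lambda lang: distance(known[lang], sample)))  # expected: english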
Awards won at the 2005 ISEF
Award of $500 - American Association for Artificial Intelligence
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2005 - CS040
THE PRISONER'S DILEMMA
Michael Dymkar Brandt
Lincoln Park High School, Chicago Illinois, United States
In the game theory branch of mathematics, there is a problem called the Prisoner’s Dilemma. Two prisoners (A and B) are unable to communicate, and each is
given the choice to turn in his partner (non-cooperative) or keep quiet (cooperative). If A and B cooperate, they both receive 1 year in prison. If neither
cooperates, they both receive 5. If A cooperates and B does not, A receives 20 and B receives 0. The payoff matrix, written as (years for A, years for B) with Prisoner A choosing the row and Prisoner B the column, is:<br><br>Not Cooperate / Not Cooperate: (5, 5); Not Cooperate / Cooperate: (0, 20); Cooperate / Not Cooperate: (20, 0); Cooperate / Cooperate: (1, 1).<br><br>The purpose of this project is to see what
strategy yields the shortest sentence when two prisoners play this game repeatedly. The computer programming language C++ is used to match up two
different strategies for 1,000,000 trials. The strategies range in levels of intricacy of artificial intelligence.<br><br> The general observed trend is that the more
uncooperative a strategy, the fewer years in prison it yields, while the more that a set of two prisoners cooperates, the fewer their combined total number of
years received. The prisoner’s dilemma problem is a model which can be expanded to many areas of study, including psychology, economics, evolution,
artificial intelligence, and political science, among others.
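A toy version of the repeated game, using the abstract's payoff matrix (years in prison; lower is better). The two strategies shown, always defect and tit-for-tat, are illustrative stand-ins, not the project's exact strategy set.

    YEARS = {("D", "D"): (5, 5), ("D", "C"): (0, 20),
             ("C", "D"): (20, 0), ("C", "C"): (1, 1)}   # (A's move, B's move)

    def always_defect(opponent_history):
        return "D"

    def tit_for_tat(opponent_history):
        # cooperate first, then copy the opponent's last move
        return opponent_history[-1] if opponent_history else "C"

    def play(strat_a, strat_b, rounds=100_000):
        total_a = total_b = 0
        hist_a, hist_b = [], []           # each strategy sees the other's past moves
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)
            ya, yb = YEARS[(a, b)]
            total_a += ya
            total_b += yb
            hist_a.append(a)
            hist_b.append(b)
        return total_a, total_b

    print(play(always_defect, tit_for_tat))   # accumulated years for each side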
2005 - CS002
INTEGRATING NEURAL NETWORKS WITH PATTERN RECOGNITION TO PREDICT STOCK MARKET MOVEMENTS
Benjamin Louis Brinkopf
Canterbury School, Fort Myers, FL USA
Neural networks are becoming a chief way to attack problems involving large amounts of information. They take their name from the brain’s neurons, which distinguish patterns and correlations in large bodies of data. <br><br> Artificial neural networks that behave much like the brain’s neurons have
been created for computers. With proper training, they can distinguish patterns from information that linear algorithms cannot.<br><br> Artificial neural
networks are mathematically driven models that connect the input nodes, output nodes, and hidden nodes, and use a backpropagation algorithm to manipulate
the hidden nodes’ values. They work best at sorting out patterns from large amounts of information of chaotic origin.<br><br> The purpose of my project was to
determine whether neural networks could accurately predict stock market movements. The results showed neural networks can forecast this data, and
depending on the variables, they can interpolate quite precisely.<br><br> I selected twenty random stocks and composites for my research and crude oil prices
from 1990 to present and 2003 to present, both using daily closing prices. I created, with the Neuralyst program, my own neural networks to analyze the market
indices, the DJIA and NASDAQ, to forecast the closing stock price for the following day.<br><br> Using the SPSS Statistical Analysis program, I analyzed the
results of my neural networks against the stock’s daily closing price, and all of the correlations were significant at the .01 level after conducting a Pearson correlation test.<br><br> These results indicate that market momentum can be forecast and that far more accurate predictive models can be built.<br><br>
2005 - CS302
THE EFFECT OF NETWORK INTERRUPTION ON THE EFFICIENCY OF BUNDLING VS. TCP/IP DATA TRANSMISSION PROTOCOLS
Anna Chan, Daniel Caughran, Colin Applegate
Yorktown High School, Arlington Virginia, USA
The efficiency of the TCP/IP vs. bundling transmission protocols as network interruption increased was investigated. In the current TCP/IP Internet system,
communication is based on packet switching, where data is split into small packets that are individually sent to the destination. TCP/IP usability assumes a
continuous connection with minimal network delay to ensure data transmission efficiency. <br><br>An Interplanetary-Internet (IPN) is a "network of Internets"
that would potentially provide communication across interplanetary distances, enabling planetary communication or space commercialization, among other
applications. A challenge to IPN is the latency involved; interplanetary transmission delay may take hours. <br><br>A solution to the latency is delay-tolerant
networking (DTN) which has an extra bundle layer on top of existing TCP/IP. This allows all transmission information to be sent in one atomic package, which
would eliminate TCP/IP conversational protocols. Further, data may be stored by nodes indefinitely rather than by temporary storage of TCP/IP memory chips.
<br><br>A framework for testing Internet transmission vs. DTN over a simulated Interplanetary Internet was constructed using four computers and a signal
interruption device. Two file sizes (14 MB and 27 MB) were transferred as interruption time was varied from 0–240 seconds. It was hypothesized that as
network disruption increased, overall DTN transmission time would surpass that of TCP/IP. <br><br>Significant differences in transmission times were
observed (ANOVA, p<.05). As interruption time increased, TCP/IP transmission time increased quadratically, while DTN remained linear for both file sizes. For
future study, a mathematical model of delay could be simulated to investigate DTN vs. TCP/IP behavior.
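A toy model of the contrast, not the authors' four-computer testbed: TCP's conversational protocol keeps timing out and retransmitting across an outage, while a DTN bundle simply waits in node storage. The rates and the quadratic penalty constant are invented so the toy curves mirror the reported shapes (quadratic TCP growth, linear DTN growth in outage time).

    def tcp_time(size_mb, outage_s, rate_mbps=8.0):
        # invented quadratic retransmission penalty, mirroring the observation
        return size_mb * 8 / rate_mbps + outage_s + outage_s ** 2 / 40.0

    def dtn_time(size_mb, outage_s, rate_mbps=8.0, bundle_overhead_s=2.0):
        # the bundle layer adds fixed overhead but only waits out the outage
        return size_mb * 8 / rate_mbps + outage_s + bundle_overhead_s

    for outage in (0, 60, 120, 240):
        print(outage, round(tcp_time(14, outage), 1), round(dtn_time(14, outage), 1))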
Awards won at the 2005 ISEF
Team Award of $1,000 for each member - U.S. Air Force
2005 - CS003
UDC - MULTIFUNCTIONAL CRYPTANALYTIC TOOL
Rostislav Igorevich Chutkov
Centre of mathematical education, Saint-Petersburg, Russia
UDC is a high-speed cryptanalytic tool designed for inverting hash functions. The high speed is achieved by distributing the hash-function evaluation. The cryptanalytic time-memory trade-off method was also improved using distinguished points and fixed chain boundaries; for the same success probability, the cryptanalytic part is approximately six times faster than modern implementations. UDC has five modes of hash-value inversion (dictionary attack, brute force, hybrid attack, correction of a typing mistake, and correction of the checksum). UDC includes a universal hash library implementing the majority of popular hash functions. Each of the 42 hash functions in my library is faster than existing implementations. The functions are written in assembly and optimized for the Intel MMX instruction set.
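A sketch of the time-memory trade-off idea with distinguished points: precompute hash chains that stop at "distinguished" values (low bits all zero), store only (endpoint, start) pairs, then invert a target by walking it forward to a distinguished point and replaying the matching chain. The 24-bit toy hash is an illustration, not one of UDC's 42 hash functions.

    import hashlib

    def h(x: int) -> int:
        return int.from_bytes(hashlib.md5(x.to_bytes(4, "big")).digest()[:3], "big")

    def distinguished(x: int, bits: int = 6) -> bool:
        return x & ((1 << bits) - 1) == 0          # low `bits` bits are zero

    def build_table(n_chains=2000, max_len=4096):
        table = {}
        for start in range(n_chains):
            x = start
            for _ in range(max_len):
                x = h(x)
                if distinguished(x):
                    table[x] = start               # store only chain boundaries
                    break
        return table

    def invert(target, table, max_len=4096):
        x = target
        for _ in range(max_len):                   # walk to a distinguished point
            if distinguished(x) and x in table:
                y = table[x]
                for _ in range(max_len):           # replay the chain from its start
                    if h(y) == target:
                        return y                   # preimage candidate
                    y = h(y)
                return None                        # false alarm from a merged chain
            x = h(x)
        return None

    table = build_table()
    target = h(h(5))   # lies on chain 5 unless that chain stopped earlier
    print(invert(target, table))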
Awards won at the 2005 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2005 - CS007
WEBENGLISH: A PHP AND MYSQL-BASED ENGLISH-TO-WEB APPLICATION CONVERTER
David Joseph Denton
Central High School, St. Joseph, Missouri 64501, United States
WebEnglish is an original online program that accurately converts English text to web applications. Several smaller programs were integrated in order for the
script to function as desired.<br><br>First, the program’s interface was designed for aesthetics and structured for easy use. Using MySQL, the database was created with
a total of 14 tables and 50 fields. Four tools were developed to aid the user. The first created a customized form for the user, the second created a table, the
third referenced links, and the fourth referenced images.<br><br>The English commands code dissected a text string, split the string into words, checked for a
keyword match in the database, and (if a match existed) added the word to an array for database insertion. <br><br>After scripting was successful, a survey to
gather keyword data was administered. Appropriate keywords were inserted into the database. Each keyword was matched in the database with its
corresponding code.<br><br>Next, an artificial intelligence aspect was programmed. This aspect allowed commands with no found keywords to be associated
with current keyword commands, so that the program learned new keywords with every use.<br><br>50 individuals from Central High School were tested.
These subjects were asked to accomplish 28 different website objectives using WebEnglish. Each attribute had a different success rate; however, none was
less than 88%. <br><br>Overall, the program was 90.4% accurate. The program’s source code included over 17,000 lines of original code. It was hypothesized
that the program would not be accurate if constructed. This hypothesis was rejected.<br><br>
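A minimal sketch of the keyword-matching core described: split an English command into words, look each up in a keyword table, and collect the matched code fragments; unmatched words are candidates for the learning step. The table contents are invented placeholders standing in for the project's MySQL keyword database.

    KEYWORDS = {
        "form":  "<form action='#' method='post'>...</form>",
        "table": "<table><tr><td>...</td></tr></table>",
        "link":  "<a href='#'>...</a>",
        "image": "<img src='#' alt=''>",
    }

    def interpret(command: str):
        matched, unknown = [], []
        for word in command.lower().split():
            word = word.strip(".,!?")
            if word in KEYWORDS:
                matched.append(KEYWORDS[word])
            else:
                unknown.append(word)      # could be associated with a keyword later
        return matched, unknown

    html, new_words = interpret("Add a table and an image please")
    print("\n".join(html))
    print("candidates to learn:", new_words)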
2005 - CS012
A CORRELATION STUDY OF CPU FANS AND THEIR EFFECT ON HEAT REMOVAL AS TASKS ARE PERFORMED
Micheal S. Eichelberger
Noxubee County High School; Macon, MS 39341; United States
This project’s purpose was to develop an awareness of computers and their relationship to temperature change during basic operations. The study used the CPU fan as the independent variable, with two standard barebones systems, one motherboard with an Intel processor and one with an AMD processor, each running a functioning operating system
(Windows XP or a similar OS: 98/2000/Me/XP). After the operating system was installed, each machine was connected to the Internet and a hardware/CPU monitoring tool was downloaded and installed from a shareware provider (http://www.cnet.com, http://www.maximumpc.com/, or http://www.zdnet.com/). With the monitoring tools installed, the processors were put through as many tasks as possible (for example gaming, movies, and photo editing), and the results were recorded in a data table as a hard copy of everything collected by the monitoring software. Two Shuttle XPC barebones were used, one motherboard supporting an Intel processor and the other supporting an AMD processor. The procedure was repeated with the CPU fan removed and replaced by the control (Shuttle’s ICE Engine heatpipe). The hypothesis was incorrect: there were differences in task performance when the CPU fan was operating at various speeds, and the CPU did not operate at all when the fan was not running. The CPU worked better with the heat pipe, which gave lower temperatures and greater stability in system CPU temperature.<br><br>
2005 - CS034
15-PASSENGER VANS: PREDICTING THE PROBABILITY AND TIMING OF ROLLOVERS USING ARTIFICIAL NEURAL NETWORKS
Nicholas Samir Ekladyous
Cranbrook Kingswood Upper School, Bloomfield Hills, Michigan, USA
The purpose of this project is to develop & test a system that predicts the probability & timing of impending vehicle rollovers in real-time. A system based on an
Artificial Neural Network (ANN) is used with a single Microelectromechanical (MEMS) sensor in several different dynamic environments. The ANN has the
advantage of speed and accuracy over traditional computational methods. A 1/11th-scale model of a 15-passenger van is used. 15-passenger vans have the
highest rollover propensity of any passenger vehicle on the road. A suite of (8) dynamic road tests simulating tripped & un-tripped rollovers is used under a
variety of passenger loading conditions to train & test the ANN. A Roll Probability Index (RPI) is developed which uses sensor data combined with the ANN to
indicate rollover probability as a function of time. The RPI is a function of roll rate, lateral acceleration, and roll angle at any given time, along with critical values for those variables. A Countdown-To-Roll (CTR) methodology is developed; the CTR is a function of the RPI and the first derivative of the RPI. The RPI, CTR, and ANN can be used to assess rollover threat and deploy vehicle rollover countermeasures and safety systems in a timely and reliable manner. The system is shown to be accurate and robust in predicting the probability and timing of impending rollovers in real time while avoiding false responses.
Awards won at the 2005 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2005 - CS013
ANALYSIS OF DRIVER STRATEGY ON TRAFFIC FLOW
Megan Ruth Emmons
Del Norte High School, Del Norte Colorado, United States of America
This project involves development of a computer program to simulate a two-lane road with cars of different aggression levels. The program moves cars down
the road, allowing them to pass and change lanes. The cars are sorted into five groups. Group 1 cars travel 5 mph under the speed limit and have a 20%
chance of passing. Group 2 cars travel at the speed limit of 60 mph and pass 40% of the time and so on until we get up to Group 4 and Group 5 which both
travel at 10 mph over the speed limit. For each simulation, involving 500 trials, the program calculates the average speeds.<br><br>This multi-year project, in
its first year, is interesting because it reflects observed traffic situations. For example, Group 1 cars, which travel 5 mph under the speed limit, were unhindered
by an increase in car density. Group 2 cars were slightly slowed down by a higher density of cars because they often got "pinned" in behind Group 1 cars.
Group 4 and Group 5 cars were the most interesting. Both these groups traveled at 10 mph over the speed limit, but Group 5 was more aggressive in passing. These groups rarely achieved 10 mph over the speed limit, though; this is understandable because they were often trapped behind slower cars as well.
There was no difference between Group 4 and Group 5 cars. The program indicates aggression isn't beneficial beyond a certain level. Finally, all cars benefited
from having more aggressive drivers.
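A toy single-lane version of the simulation: cars have target speeds and passing probabilities by group, and a car stuck behind a slower one either passes (with its group's probability) or matches the slower speed. Groups 1 and 2 follow the abstract; the parameters for groups 3 through 5 extend the stated pattern and are assumptions.

    import random
    from collections import defaultdict

    GROUPS = {1: (55, 0.20), 2: (60, 0.40), 3: (65, 0.60),
              4: (70, 0.80), 5: (70, 0.95)}   # (target mph, chance to pass)

    def trial(n_cars=50):
        order = [random.choice(list(GROUPS)) for _ in range(n_cars)]  # front to back
        result = []
        for i, group in enumerate(order):
            target, p_pass = GROUPS[group]
            speed = target
            # toy rule: every slower car ahead must be passed, or speed drops
            for ahead in order[:i]:
                if GROUPS[ahead][0] < speed and random.random() > p_pass:
                    speed = GROUPS[ahead][0]
            result.append((group, speed))
        return result

    totals, counts = defaultdict(float), defaultdict(int)
    for _ in range(500):                       # 500 trials, as in the project
        for group, speed in trial():
            totals[group] += speed
            counts[group] += 1
    for group in sorted(GROUPS):
        print(group, round(totals[group] / counts[group], 1))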
2005 - CS032
STATISTICAL ANALYSIS AND GRAPHING ENGINE
Grant Tyler England
Hart County High School; Munfordville, Kentucky; USA
This project originated as a simple idea--to create an original JAVA program to assist in calculating and analyzing (via graphs and calculations) statistical
values. I programmed the entirety of the code, which is quite lengthy (approximately 75 pages); SAGE arose from this seemingly interminable amount of code. The Statistical Analysis and Graphing Engine, or SAGE, combines topics from two separate areas, computer science and
statistics. SAGE was created using JAVA (http://www.sun.com/software/java) and the BlueJ programming environment (http://www.bluej.org) to allow statistics
to be calculated more easily and efficiently and graphs to be produced accurately and intuitively. Creating SAGE required extensive knowledge of the JAVA
programming language and statistics, and it can be used to calculate many different statistics as well as to view many different types of statistical graphs
(including linear regression statistics and graphs, pie charts, histograms, and other statistics and graphs).
2005 - CS308
INFOLIBRAS
Juliano Firpo, Augusto Simon, Vinicius D'Agostin
Fundacao Liberato Salzano Vieira da Cunha, Novo Hamburgo - RS - Brasil
Visiting several deaf schools in Brazil, both private and public, we noticed the lack of support software to assist teachers in the education of deaf
children, despite the good infrastructure of the computer science laboratories. After field research on the subject, we decided to develop software that addresses a great part of this difficulty by presenting something attractive and instructive to deaf children, always under the supervision of specialists in the area. It is important to point out that our project consists of teaching LIBRAS (Brazilian Sign Language), which is the mother language of deaf people and highly valued by them. Unlike existing software, it is aimed at children and focuses on the LIBRAS used in the south of Brazil: just as regional accents exist, there are differences in the use of LIBRAS in each region. Some criteria were established for the development of the work: the software must be easy to understand, must have a visual interface attractive to children, must contain demonstration videos of LIBRAS, and must include at least two didactic games. Once completed, the software was tested with several types of specialists in deaf education and in computer science applied to education, and with deaf pupils from three different special schools in the greater Porto Alegre region. The project was fully accepted both by the teachers and by the children.
2005 - CS050
DECOMPILING JAVA PART II: CONTROL FLOW
David Lawrence Foster
Chamblee High School, Chamblee, Georgia, USA
The goal of this project is to demonstrate a method for analyzing patterns of control flow in program code. This analysis, called control flow analysis, would
allow groups of program instructions to be organized into a set of structured code blocks, which would exhibit only those control flow structures permitted in
Java source code. This research is highly applicable to the implementation of Java-language decompilers.<br><br> Independent research was performed,
using reasoning and experimentation, to discover various patterns of control flow and a set of techniques for expressing them in terms of structured
programming constructs. Various methods for control flow analysis were tested by integrating potential implementations of them into a decompiler program that
was developed during the course of last year's project.<br><br> As a result of this project's research, a procedure for control flow analysis was successfully
developed that supports the recognition of all Java-language control flow constructs, except for finally-blocks that are implemented using the opcodes JSR and
RET. The complete set of algorithms in the implemented decompiler, including control flow analysis, successfully processes 99.68% of the 6949 classes in the
Java 2 Platform Standard Edition v1.4.1, as of February 19, 2005. In none of the 22 unprocessable classes was the failure the fault of the control flow analysis
algorithm.<br><br> The analysis procedure detailed in this project is directly applicable for implementers of Java-language decompilers and obfuscators.
Additionally, this procedure can be adapted to suit the analysis of control flow in nearly any structured programming language.
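A small illustration of one building block such control flow analysis relies on: finding loops in a control flow graph by detecting back edges with a depth-first search. The tiny CFG is invented; a real decompiler would build it from bytecode basic blocks, and the project's full procedure goes well beyond this step.

    def find_back_edges(cfg, entry):
        back_edges, visiting, done = [], set(), set()

        def dfs(node):
            visiting.add(node)
            for succ in cfg.get(node, ()):
                if succ in visiting:          # edge into an ancestor: a loop
                    back_edges.append((node, succ))
                elif succ not in done:
                    dfs(succ)
            visiting.discard(node)
            done.add(node)

        dfs(entry)
        return back_edges

    # 0 -> 1 -> 2 -> 1 (a while loop), with 1 -> 3 as the loop exit
    cfg = {0: [1], 1: [2, 3], 2: [1], 3: []}
    print(find_back_edges(cfg, 0))            # -> [(2, 1)]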
2005 - CS027
A NEW APPROACH TO COMPUTER VIRUS RECOGNITION USING ARTIFICIAL NEURAL NETWORKS PHASE II: A NEW FEATURE EXTRACTION
SYSTEM FOR RECOGNIZING WINDOWS® VIRUSES
Benjamin Alan Frison
Carlisle High School, Carlisle, PA, USA
Computer viruses cause billions of dollars in destruction to information systems each year. The exponential growth in the number and complexity of viruses has
caused a need for antiviral systems that express innate immunity. Artificial neural networks (ANNs) are promising in this respect because of their generalization
capabilities. ANNs have successfully been used to recognize boot sector viruses; however, this year’s research expanded the capabilities of ANNs to distinguish Windows viruses from benign programs.<br><br>Binary signatures are high-dimensional data, so a feature extraction system was created to extract DLL information from over 2,500 Windows executables. Two monolithic ANNs were created, one recognizing viruses using DLL names as feature selection criteria,
and the other using DLL function calls as feature selection criteria. The first architecture achieved 78% overall accuracy, and a 53% detection rate of new,
unforeseen malicious mobile code. The second architecture had over 10 million neural connections, creating too much overhead. <br><br>Further research will
integrate both architectures and optimize the ANN using an evolutionary algorithm.<br><br>
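A sketch of the feature-extraction idea: turn each executable's imported DLL names into a fixed-length binary feature vector suitable as ANN input. The import lists below are invented; the project extracted them from real Windows executables.

    PROGRAMS = {
        "benign.exe": {"kernel32.dll", "user32.dll", "gdi32.dll"},
        "virus.exe":  {"kernel32.dll", "wsock32.dll", "advapi32.dll"},
        "editor.exe": {"kernel32.dll", "user32.dll", "comdlg32.dll"},
    }

    # the vocabulary is the union of all observed DLL names, in a fixed order
    VOCAB = sorted(set().union(*PROGRAMS.values()))

    def feature_vector(dlls):
        return [1 if name in dlls else 0 for name in VOCAB]

    for prog, dlls in PROGRAMS.items():
        print(prog, feature_vector(dlls))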
2005 - CS053
REVERSE ENGINEERING THE MICROSOFT XBOX TO ENABLE LINUX BASED CLUSTERS: ANALYZING THE EFFICIENCY OF PROCESSING
DISTRIBUTION OVER AN XBOX CLUSTER
Bennet Grill
Rio Rancho High School, Rio Rancho, New Mexico, United States
The Microsoft Xbox is a computer: it contains components nearly identical to a desktop PC’s, slightly modified to function in a gaming console. Through hardware modification, the Xbox can be enabled to run Linux (which supports clusters). The Xbox has the potential to be a viable and economical addition to a cluster; the question lies in the efficiency of an Xbox in a cluster. Four Xboxes were modified in order to enable the running of Linux code. Distcc, a front-end
to the GNU compiler, was installed on four machines. The Xboxes were configured in a network by means of a router through their Ethernet ports. Distcc was
started and then told to compile the Gentoo distribution kernel that the Xboxes were running. The test was repeated with one Xbox and then four Xboxes;
Distcc provided benchmark times for the kernel to be compiled. The results evidenced a 240.78% processing power increase with a four node cluster over a
single node. Each Xbox added approximately 80.26% of its processing power, or about 588 Megahertz. The four node cluster essentially functioned as a 2.5
Gigahertz compiler. This efficiency could be utilized by many small businesses or developers. Also, programmers in underdeveloped countries could have an
affordable compiler by adding Xboxes to a cluster. This experiment provides insight into the dual use of an Xbox as a gaming console and a working Linux PC.
Future experiments could include non-compilation based processing as well as testing in conjunction with desktop computers.
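The cluster arithmetic from the abstract, worked through. The 733 MHz figure is the Xbox's stock CPU clock, a fact about the hardware rather than a number stated in the abstract.

    single_node_mhz = 733.0
    per_node_fraction = 0.8026          # each added Xbox contributes 80.26%

    added_mhz = per_node_fraction * single_node_mhz
    print(round(added_mhz))             # ~588 MHz per added node

    # four nodes: one full node plus three partial contributions,
    # equivalent to the reported 240.78% increase over a single node
    cluster_mhz = single_node_mhz * (1 + 3 * per_node_fraction)
    print(round(cluster_mhz))           # ~2498 MHz, i.e. roughly a 2.5 GHz compiler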
Awards won at the 2005 ISEF
Tuition Scholarship of $105,000 - Drexel University
2005 - CS307
ANALYSIS OF THE DEFECTS OF THE CURRENT COMPUTER KEYBOARD AND A NEW-TYPE KEYBOARD DESIGN
Zhenfeng Han, Wangyang Su, Yimo Gao
Attached Senior School of Shandong Normal Univ., Jinan, Shandong, CHINA
In people's daily life and work the computer is now essential, and information is usually entered through the keyboard, an important part of the computer. However, the design of the keyboard has problems in some respects. To make the keyboard more comfortable and efficient in use, some aspects of the design are analyzed and improved, such as the keyboard layout, the key shape, and the function keys.<br><br> In China, text is often entered with the Chinese Pinyin input method. Though Pinyin and English share the same letters, there are many differences in usage frequency and combination patterns, so the traditional keyboard layout (QWERTY) may not be a good fit for Chinese. We wrote software to study the probability distribution of the usage frequency of Chinese Pinyin. Based on the statistical results, we designed a keyboard layout adapted to Chinese Pinyin. Letter-input tests show that our design achieves higher speed than the traditional one.<br><br> Starting from the defects we found in current keyboards designed according to ergonomics, and by adding some new elements of key shape, a new kind of key shape is created that combines characteristics of existing keyboards. We summarized the usage patterns of the current function keys and rearranged the function keys according to actual application efficiency. <br><br> Combining the ideas mentioned above, a new style of keyboard will be made, adapted to the Chinese computer beginner. By adopting our improvements, manufacturers can produce products that users enjoy.<br><br>
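A minimal sketch of the frequency study described: count how often each letter and letter pair appears in a corpus of Pinyin text, the raw statistic a layout could be optimized against. The one-line corpus is a placeholder for a real body of Pinyin.

    from collections import Counter

    corpus = "ni hao wo men xue xi han yu pin yin shu ru fa"
    letters = [ch for ch in corpus if ch.isalpha()]

    single = Counter(letters)
    # letter pairs counted within each syllable/word
    pairs = Counter(w[i:i + 2] for w in corpus.split() for i in range(len(w) - 1))

    for letter, count in single.most_common(5):
        print(letter, round(count / len(letters), 3))
    print(pairs.most_common(3))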
2005 - CS006
WHAT FACTOR HAS THE MOST INFLUENCE ON THE SPEED OF A COMPUTER?
Bakari Benjamin Hill
Frederick Douglass High School, Atlanta, GA
To find out what factor has the most influence on the speed of a computer, I constructed two computers and mounted them to a plexiglas board. I installed an operating system on each computer, installed the benchmarking software, ran the tests, and observed the results.<br><br>System 1v.1: CPU AMD Duron 1.6 GHz, 266 FSB; RAM 256 MB DDR 333 PC2700; operating system load time 45 min.<br><br>System 1v.2: CPU AMD Duron 1.6 GHz, 266 FSB; RAM 128 MB DDR 333 PC2700.<br><br>System 2v.1: CPU AMD Sempron 2400+ 1.67 GHz, 333 FSB; RAM 128 MB DDR 333 PC2700; operating system load time 38 min.<br><br>System 2v.2: CPU AMD Sempron 2400+ 1.67 GHz, 333 FSB; RAM 256 MB DDR 333 PC2700.<br><br>Even though the front-side bus speed seems to greatly influence the performance of a computer system, it does not seem to be the most influential subsystem. A detailed examination of all subsystems is required to determine what single subsystem, if any, has the greatest influence on the speed of the computer, or whether it is the balanced interaction of all the subsystems working together.
2005 - CS301
GENERATING 3D ANATOMICAL VIRTUAL MODELS FOR MEDICAL TRAINING SYSTEMS: PHASE III
Neha Rajendra Hippalgaonkar, Alexa Danielle Sider
Lake Highland Preparatory School, Orlando Florida, United States
Augmented Reality (AR) is an emerging technological paradigm that has the potential to change our world. AR allows us to place 3D/2D virtual images into the
real world at specific locations. A specific AR application, developed using the Human Patient Simulator (HPS), is the Endotracheal Intubation medical training
tool. Wearing a Head Mounted Display (HMD), the user is able to see 3D images of the upper body’s internal anatomy superimposed on the HPS. <br><br>The
purpose of the present research was to apply and validate the methods previously proposed, which consisted of scaling a source mandible to a target
mandible. In years one and two, three human mandibles, all of the same gender, race, and age group, were digitized to form accurate 3D virtual models. A specific set of physical landmarks was chosen; of these landmarks, 31 were selected for the scaling method. <br><br>This year we continued our work,
precisely determining the optimal scaling factor as well as designing an appropriate validation methodology to statistically analyze the results of years one and
two. Using the newly determined scaling factor, results indicate that a 3D virtual mandible model can be made morphometrically equivalent to a real target-specific mandible within a 1.85-millimeter average error bound. <br><br>Additionally, the virtual mandible has the potential to be useful as a reference target for registering other anatomical models, such as the lungs, on the HPS. Such a registration use is made possible by physical constraints between the mandible and
the spinal column in the horizontal, normal rest position. Future research will further investigate registration accuracy using generated virtual models.<br><br>
<br><br>
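One simple way to compute a uniform scaling factor between matched 3D landmark sets is least squares on centered coordinates, sketched below. The landmark coordinates are invented, and the project's exact scaling method may differ; this only illustrates the source-to-target idea.

    def scale_factor(source, target):
        n = len(source)
        cs = [sum(p[i] for p in source) / n for i in range(3)]   # source centroid
        ct = [sum(p[i] for p in target) / n for i in range(3)]   # target centroid
        num = den = 0.0
        for s, t in zip(source, target):
            s0 = [s[i] - cs[i] for i in range(3)]
            t0 = [t[i] - ct[i] for i in range(3)]
            num += sum(a * b for a, b in zip(s0, t0))
            den += sum(a * a for a in s0)
        return num / den   # s minimizing sum ||s * source - target||^2

    src = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (0, 0, 2)]
    tgt = [(0, 0, 0), (3, 0, 0), (0, 3, 0), (0, 0, 3)]
    print(scale_factor(src, tgt))   # -> 1.5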
Awards won at the 2005 ISEF
$2,000 Team Collaboration Scholarship Award, to be divided equally among team members. - Robert Luby, Jr. of IBM
Team Second Award of $400 for each team member - IEEE Computer Society
Fourth Award of $500 - Team Projects - Presented by Ricoh
2005 - CS030
PROJECT CHASM : CREATION OF A CHESS HEURISTIC / ALGORITHM STATE MACHINE
Paul Young Jarrett
Academic Magnet High School, North Charleston, SC, USA
Ideas about programming an artificial intelligence to play chess generally fall into the categories of attempting to evaluate every possible result, called brute
force, and the evaluation of only a limited number of moves, called selective generation. This thesis will blend the two types by making a finite state machine
(FSM) chess artificial intelligence, where each state contains a different behavior on how it will select a best move. The majority of states within the FSM will be
used to selectively generate moves according to the algorithms and heuristics within that state, while two other states will contain brute force algorithms. This
thesis will attempt to verify that selective generation can work well when backed up by brute force search techniques to find winning moves.
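A skeleton of the finite-state-machine idea: each state owns a move-selection behavior (selective heuristics or brute force), and the machine switches states based on the position. Everything here, the state names, the transition triggers, and the stub evaluators, is an invented illustration of the architecture, not the thesis's actual heuristics.

    def heuristic(position, move):
        return 0.0                             # stub positional evaluator

    def minimax_value(position, move, depth):
        return 0.0                             # stub brute-force search

    def opening_state(position, moves):
        return moves[0]                        # stub: canned opening play

    def selective_state(position, moves):
        return max(moves, key=lambda m: heuristic(position, m))

    def brute_force_state(position, moves):
        return max(moves, key=lambda m: minimax_value(position, m, depth=4))

    def choose_state(position):
        if position["ply"] < 10:
            return opening_state
        if position["material_swing"] > 3:     # tactical position: brute force backup
            return brute_force_state
        return selective_state

    def best_move(position, moves):
        state = choose_state(position)         # the FSM transition
        return state(position, moves)

    print(best_move({"ply": 22, "material_swing": 5}, ["e4e5", "d4d5"]))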
2005 - CS022
A TECHNIQUE FOR GROUPING CLIENTS BASED ON MACHINE LEARNING
Hyun-Ho Jung
Kwangnam High School, 554-11 Kwangjang-dong Kwangjin-gu Seoul, South Korea
In a typical client-server system, all clients are directly connected to the server. In this situation, if the server tries to send a message to all clients, a delay in one client can lower the performance of the whole system. In particular, the time interval between the first client to receive a message and the last client can increase. This problem can be solved by grouping the clients. In this paper, we propose a technique to group the clients to minimize the time interval, based on machine learning. From experiments, we observed that the number of clients and the size of a message are the factors which affect the time interval. The rule for grouping clients is generated by machine learning considering these two factors. Using this rule we can effectively group the clients, minimizing the time interval.<br><br>
2005 - CS056
THE ALLOCATED SYSTEM OF AUTOMATIC CONTROL AND MODELLING OF TECHNICAL STRUCTURES
Alexey Korobitsyn
Specialized School in Mathematics, Physics and Informatics, KAZAKHSTAN
The allocated system of automatic control of engineering systems is a multipoint networked system for monitoring physical parameters and visualizing technological processes. The system is based on a minimum of external devices and monitors physical parameters by means of a computer’s standard sound card, without the use of industrial controllers. A pulse signal of variable frequency serves as the input signal, transmitting the values of the monitored physical parameters from the gauge to the computer.<br><br>The system includes means of visualizing technological processes and of building a three-dimensional dynamic model of the controlled engineering system. The model displays changes in the physical parameters of the engineering system and the actions of its execution units. Three-dimensional visualization of manufacturing is a new direction in automation technology and makes working with complex engineering systems easier.<br><br>Control of such an engineering system can be carried out by a group of users, each working with different units. Users of the system can work with the overall model of the engineering system and can collaborate on its design in real time.<br><br>The system of automatic control does not require adjustment after installation on a new computer, which considerably simplifies its use. The field of application of this system is rather wide due to its availability: the system is designed for the mass consumer and can be used not only for industrial and scientific purposes but also at home.<br><br>
2005 - CS046
THE BABELFISH SINGS VERSION 2.0
Nikhil Krishnaswamy
Piedra Vista High School, Farmington, New Mexico, USA
The purpose of this research is to determine the accuracy of a computer program that converts written text into spoken text and which of the test languages has
the most regular orthography.<br><br>Within the International Phonetic Alphabet, there are twelve or fewer subdivisions in the location and manner of
articulation of any sound, so a duodecimal system can transcribe every IPA symbol. Since there are twelve notes in an octave in the Western musical scale, a
system based on musical tones fits this requirement.<br><br>It was hypothesized that a computer program that converts written language into speech would be 90% accurate or better in all languages tested; that, when pronunciation is converted to a normalized spelling, the result would be within 85% accuracy of the result of the original IPA conversion and within 75% accuracy of the original text input; and that English would have the most exceptions in its orthography and Spanish the least.<br><br>
Spelling rules are contained in text files that contain information about the selected language, known as RULE files. Users can also create RULE files to load
into the program. The application includes functions to isolate letters in the input, check them against environments in the RULE file and convert them to values
assigned to sounds, functions to convert numbers between decimal and duodecimal, and functions to convert IPA symbols to speech.<br><br>The results
supported the hypotheses. English pronunciation was 90.4% accurate and the normalized spelling was 77.7% accurate. Spanish pronunciation was 100%
accurate and the normalized spelling was 94.5% accurate. All other test languages fell within this range.
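A tiny sketch of the RULE-file mechanism: context-sensitive spelling rules map letters to sound symbols. The two Spanish-flavored rules below ("c" is soft before e/i, hard before a/o/u) are invented examples of the mechanism, not entries from the program's actual RULE files.

    RULES = [
        ("c", "ei",  "s"),   # (letter, following-letter environment, sound)
        ("c", "aou", "k"),
        ("h", "",    ""),    # empty environment: always applies (silent h)
    ]

    def to_sounds(word):
        out = []
        for i, ch in enumerate(word):
            nxt = word[i + 1] if i + 1 < len(word) else ""
            for letter, env, sound in RULES:
                if ch == letter and (env == "" or (nxt and nxt in env)):
                    out.append(sound)
                    break
            else:
                out.append(ch)   # default: the letter maps to itself
        return "".join(out)

    print(to_sounds("cocina"))   # -> "kosina"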
2005 - CS044
SMART SOARING: FLOCKS OF AUTONOMOUS AIRPLANES
Elliot Robert Kroo
Henry M Gunn High School, Palo Alto CA, USA
Thermal updrafts are used by both birds and aircraft to improve flight performance. This project models the actions of a sailplane or soaring bird to enable an
autonomous aircraft to improve its performance using thermals. The project also shows how higher performance can be gained through collaboration with other
aircraft.<br><br> In the computer program used to model flight, all modules were developed by the author -- a three-dimensional flight simulation, thermal
model, control laws, genetic optimizer, and graphics package. The three-dimensional simulation uses vector geometry to model an aircraft’s flight in a ground-based coordinate system. Object-oriented programming techniques were also implemented, allowing for multiple planes to be easily included in the same
simulation. The thermal model was based on a recent NASA paper, together with input from several sailplane pilots. Both single airplane and collaborative
control laws, involving communication among several aircraft, were developed and tested in the simulation, and control variables were optimized using a
genetic algorithm. The graphics package allowed solutions to be viewed in real time.<br><br> Significant performance improvements were attained by
optimizing both the single airplane and the collaborative control laws. Smaller aircraft were found to be able to take greater advantage of thermals and the
multiple airplane system demonstrated much better performance than the single airplane heuristic models.<br><br> The results suggest several exciting
applications in which energy in thermals might be used to dramatically increase the endurance of unmanned civil or military aircraft.<br><br>
Awards won at the 2005 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
Second Award of $1,500 - U.S. Air Force
2005 - CS057
STEREOGRAPHIC STEREOMETRY
Saken Kungozhin
The Specialized School in Math, Physics and Informatics, Almaty, Kazakhstan
Nowadays computer-based 3D imaging (volumetric, three-dimensional imaging) of objects is a topical problem, as it is widely used in various branches
of science and technology (mechanical engineering, geodesy, architecture, medicine, cinema, photography, computer games, etc.).<br><br>The main purpose
of the present project is to deduce the mathematical formulas required for implementation of principles of the computer-based stereo imaging, and also to
demonstrate the application opportunities of the computer-based stereo graphics in educational process. The feature of the project is that it was carried out in
interconnection of the following subjects: mathematics, physics, and computer science. <br><br>This method is very convenient to use in educational
technologies at secondary schools, as it doesn’t need expensive technical devices.<br><br>The principles of realizing such a stereoscopic effect with glasses supplied with red-cyan filters are proposed, and the mathematical formulas required for the computer-based implementation of these principles are deduced on the basis of analytical geometry. A program for stereo imaging of elementary geometric objects was developed in the Turbo Pascal language.<br><br>Computer-based stereo figures and stereo animations can be widely used as visual aids during lectures on geometry, astronomy, and chemistry. The results of this project may also be useful for creating stereo figures and stereo animations for multimedia textbooks and manuals.<br><br>
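The underlying analytic geometry can be sketched as follows: project a 3D point twice, once per eye, onto the screen plane; the red image gets the left-eye projection and the cyan image the right-eye one. The eye separation and screen distance are illustrative values, not the project's deduced formulas.

    def project(point, eye_x, screen_z=1.0):
        """Perspective-project (x, y, z) onto the plane z = screen_z as seen
        from an eye at (eye_x, 0, 0); valid for points with z > screen_z."""
        x, y, z = point
        t = screen_z / z                      # similar triangles along the view ray
        return (eye_x + (x - eye_x) * t, y * t)

    half_gap = 0.03                           # half the interocular distance
    p = (0.2, 0.1, 3.0)
    left = project(p, -half_gap)              # drawn in red
    right = project(p, +half_gap)             # drawn in cyan
    print(left, right)                        # the horizontal offset gives depth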
2005 - CS029
AGE OF ACQUISITION IN FACIAL IDENTIFICATION: A CONNECTIONIST APPROACH
Brenden Manker Lake
Torrey Pines, San Diego CA, US
Age of Acquisition (AoA) is the phenomenon that acquiring a certain piece of information earlier than another results in a faster response time in adulthood.
AoA has been shown to have a significant role in a variety of human studies. Recently, it has been demonstrated that connectionist networks that abstractly
model reading and arbitrary mappings can also show AoA effects, and we extend this to facial identification. We present a connectionist model of facial
identification that demonstrates strong AoA effects by allowing faces to be acquired in their natural order and by staging face presentation. This extends
previous work by showing that a network that simply classifies its inputs also shows AoA effects. We examine which people’s faces are acquired earliest in the
natural model. We find that both “uniqueness” compared to the rest of the training set and “consistency” across multiple images predict AoA. We manipulate the
staged model in two ways, by either assuming outputs for the late set are trained to be off early in learning (or not), and by assuming the representation
developed for the early set is used for the late set (or not). In three of these cases, we find strong AoA effects, and in the fourth, we find AoA effects with a
recency control.
Awards won at the 2005 ISEF
Award of $500 - American Association for Artificial Intelligence
2005 - CS016
THE ROLE OF INITIAL CHROMOSOME SIZE IN A VARIABLE-LENGTH CROSSOVER AND ENCODING SCHEME IN A GENETIC ALGORITHM
Nathan Christopher LeClaire
duPont Manual High School, Louisville, KY, United States
The genetic algorithm (GA) is a search algorithm that simulates Darwinian evolution in an attempt to generate optimal or near-optimal solutions to problems.
The traditional genetic algorithm represents solutions as fixed-length binary strings (called chromosomes), but there is a class of problems for which it is
necessary to allow the chromosomes to be of variable length. The genetic algorithm can be altered to allow this, but a variable-length representation raises
several fields of inquiry, one of which concerns the effect of the initial length of the strings on the efficiency of the genetic algorithm. In this experiment several
initial chromosome lengths were tested in a genetic algorithm that utilizes a variable-length crossover and encoding scheme to generate approximate integer
solutions to equations. It was found that an overly long initial chromosome size is harmful to the efficiency of the genetic algorithm and slows the process.
2005 - CS306
MULTI-FREQUENCY OPTICAL DATA STORAGE SYSTEM
Rodrigo Luger, Giuliano Sider
Escola Americana de Campinas, Campinas, SP, BRAZIL
The main objective of this year’s project was to develop a data storage system more compact and efficient than those used in regular CDs and DVDs; this
system was designed to modulate data by placing translucent layers of different colors in the orifices of a storage disc. Depending on the color of each orifice in
the disc, a different value was stored. This project has developed both the multi-colored (or Multi-Frequency, as the project title so pompously hollers out) discs
and the automated reading device for interpreting the multi-colored orifices in the disc as actual data displayed on screen. <br><br>The innovative aspect of
this project lies in its unique use of different colors in order to modulate a number of different discrete possible data values in the disk. Whereas the previous
project made use of different numbers of layers of semi-opaque monochrome material to modulate discrete data values on the disk, this project uses different colors of material with equal opacity. In other words, all orifices of the color-based discs are equally opaque, but they absorb different wavelengths of
light.<br><br>The project consists of the multi-colored (or Multi-Frequency) discs where the data is modulated, the diodes that sense the color incident upon
them across from the discs, an Analog to Digital converter that receives the voltage values from the diodes and transmits them to the computer via the serial
port, and the computer program that interprets the raw input data as the useful information originally modulated in the disc. The project has successfully developed this working model of an innovative data storage system and has set a precedent for further development of this type of data storage
medium.<br><br>
Awards won at the 2005 ISEF
Fourth Award of $500 - Team Projects - Presented by Ricoh
2005 - CS310
RCE – A SYSTEM FOR CRYPTED COMMUNICATIONS
ENRICO CESARE MAGGIONI, CLAUDIA MARIA COLCIAGO, MOTTADELLI RICCARDO
LICEO SCIENTIFICO E. MAJORANA, DESIO, ITALY
The aim of the project has been the development of a cryptographic system which combines a good standard of security with the possibility to be used by any
PC, not only professional ones. The project implies the use of a key as long as the entire text to be encrypted. In our project the key is generated through five mathematical sequences, managed by a common code and a personal code, and chosen on the basis of the frequency of the output values. Every single character is converted into its ASCII code and then encrypted into another ASCII code through the following modular function: a_n = (c_n * b_n) mod 251, which is unidirectional thanks to the use of modular arithmetic, over-ciphering, and commutativity. Here a_n is the output encrypted value, c_n is the ASCII code of the original character, b_n is the value of the already mentioned sequence, and 251 is the modulus. The use of this modular algorithm allows a double
encryption which avoids the problem of the distribution of the key, because the sender (Alice) and the receiver (Bob) have different keys. Nevertheless we have
introduced the already mentioned common code, the same for Alice and Bob, which is calculated through Diffie-Hellman’s method in order to prevent the
interceptor from going back to the original message in case the interceptor replaced Bob. After a practicability test, developed through Microsoft Excel, the work
has been carried out in Microsoft Visual Basic, in order to create an efficient and easy to use software. This cryptographical system can be used to encrypt any
sort of file, even though only *.wav, *.bmp and *.txt encrypted files can be opened by the program during the procedure. Through a fine-grained frequency analysis, the
program has been tested on the encryption of Alessandro Manzoni’s entire novel “I promessi sposi” and on other documents linked to this literary area, like an
image of Lake Como (“Quel ramo del lago di Como...”, quoting the Italian author).
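A sketch of the core modular step: encrypt with a_n = (c_n * b_n) mod 251 and decrypt with the modular inverse of b_n (251 is prime, so every nonzero b_n is invertible). The toy keystream stands in for the project's five key sequences, and the sample text keeps character codes below 251.

    M = 251

    def encrypt(text, keystream):
        return [(ord(ch) * b) % M for ch, b in zip(text, keystream)]

    def decrypt(codes, keystream):
        # pow(b, -1, M) is the modular inverse of b modulo the prime M
        return "".join(chr((a * pow(b, -1, M)) % M) for a, b in zip(codes, keystream))

    key = [7, 42, 199, 15, 88, 3, 121]        # toy key values in 1..250
    cipher = encrypt("secret", key)
    print(cipher)
    print(decrypt(cipher, key))               # -> "secret"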
2005 - CS019
BRAILLE ACCESSIBLE LEARNING SYSTEM: A DISTANCE LEARNING APPROACH
Ahmad Shakir Manshad
Las Cruces High School, Las Cruces New Mexico, United States
NGB (Next Generation Browsing) is a breakthrough in technology whose sole purpose is enhancing the accessibility of distance education for the visually
impaired by means of web-aided learning. A web browser that offers the broadest possible access for the next generation Internet user must be developed,
because online learning is fully dependent on Internet access. When this type of web browser is created, it will be programmed to seek and find any Access
Tag in a web page. When this occurs, the information contained within the Access Tag will be more accessible. Combined with this web browser will be a
Braille keyboard upgraded with 3 Braille codes for maximum accessibility. NGB’s online learning efficiency was tested by 10 blind students, who used the system and filled out a survey; the average overall score for how much the blind students enjoyed using the system was 100%. My hypothesis was supported, as the blind students experienced a decrease in accessibility problems when utilizing my software;
this innovation has the power to bridge the educational gap between the blind and the sighted. Ultimately, I hope this system will reach all of the 180 million
blind and visually impaired people all over the world.
Awards won at the 2005 ISEF
Tuition Scholarship of $105,000 - Drexel University
Tuition Scholarship Award of $5,000 per year for 4 years for a total value of $20,000 - Indiana University
Tuition Scholarship of $5,000 per year for four years - Oregon State University
$3,000 First Place Scholarship Award in Computer Science - Robert Luby, Jr. of IBM
Second Award of $500 - IEEE Computer Society
2005 - CS054
THE USE OF DATA COMPRESSION TO DETERMINE THE SUBJECT OF AN UNKNOWN FILE FOR A DATABASE SEARCH ENGINE
Jacob Evan Meacham
Big Sky High School, Missoula, MT, U.S.A
For search engines to begin cataloguing scientific databases, they must be accurate and efficient, and they must be able to look at the content of single papers.
One of the ways to catalogue information accurately is through the use of data compression. Data compression is an algorithm that removes redundancies
within a document. Compression programs write their own dictionary while they compress files, and optimize the dictionary so that the data compression is as
efficient as possible. If another file, then, were compressed with a dictionary optimized for one file, it would follow that the more similar the two files were, the
lower the relative entropy they would have to each other. This project will use this principle of data compression to determine the subject of a document. 100
papers in each of six categories were selected. The 100 papers of each subject were concatenated and compressed, generating a signature of the entire subject, Ai. Then, papers of unknown subject, B, were taken and concatenated to each of the Ai. The relative entropy of B with respect to each Ai was determined, finding the known subject with the smallest relative entropy to the test file. In nearly all of the tests run, it was possible to determine the subject of the unknown text with extreme accuracy. In
the alpha stage, the search engine appeared to function well. Qualitatively, the data compression search algorithm ranked information in a more useful way
than more conventional search techniques.
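A sketch of the compression trick described: if compressing B appended to A barely increases A's compressed size, the two texts share redundancy and are likely about the same subject. zlib stands in for the project's compressor, and the "papers" are one-line placeholders.

    import zlib

    def extra_bytes(corpus: bytes, unknown: bytes) -> int:
        # size increase when the unknown text rides on the corpus's redundancy
        return len(zlib.compress(corpus + unknown)) - len(zlib.compress(corpus))

    subjects = {
        "biology": b"the cell membrane regulates transport of proteins " * 50,
        "physics": b"the electron scattering cross section was measured " * 50,
    }
    unknown = b"membrane proteins regulate transport across the cell"

    best = min(subjects, key=lambda s: extra_bytes(subjects[s], unknown))
    print(best)   # expected: "biology"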
2005 - CS014
THE EFFECT OF RAM MEMORY ON THE OVERALL COMPUTER PERFORMANCE SPEED
Juan Carlos Meléndez-Otero
Colegio San José; San Juan, Puerto Rico
The purpose of this investigation was to determine the speed changes in a computer by using different RAM chips, informing computer users whether a RAM
upgrade could be a useful and cheap way to improve computer performance. The following RAM capacities were chosen: 1536MB, 1024MB and 512MB. Each
RAM capacity was inserted into the computer’s motherboard RAM slot and the computer was booted. The first speed test was the time it took to load the operating system, the second test was the time it took to load three different programs, and the final tests were the time it took to load three different Internet browsers. In the tests on the operating system’s loading time, a constant loading time of forty-eight to forty-nine seconds held across all of the RAM capacities; the standard deviation and range did not change with different RAM. The tests on the programs showed that their loading speed is not affected in any way by the RAM capacity; the standard deviation and range were erratic. In the tests on the Internet sites’ loading speed, the more RAM the computer had, the faster they loaded. From these results we can conclude that the operating system and low-resource-usage programs are not affected in any way by the computer’s RAM, and that high-resource-usage programs, in this case Internet browsing, do show a significant improvement with higher RAM capacities.
2005 - CS038
I1N-KRI1P-SHE6N BI2 SI1L-E6-BE6LZ (ENCRYPTION BY SYLLABLES)
Jacob Donald Mitchell
Westfield High School, Westfield MA, USA
The purpose of this study was to determine if a syllabic cipher has a higher security efficiency (ratio of possible solutions to key size) than the traditional letter-based cipher when using the RSA encryption algorithm. English is an alphabetic language, so there are a couple dozen letters which can be combined
to create hundreds of unique syllables--this characteristic makes syllabic ciphers seem ideal.<br><br> The number of possible solutions is determined by
comparing the frequency of letters or syllables in the message to the standard frequency. The largest absolute frequency difference is the minimum variation
from the standard frequency required to find the correct solution to the cipher. Given the standard frequency and the variation from it, specific letters or syllables
may become interchangeable. The number of letters or syllables that each one is interchangeable with translates into permutations which can be used to
calculate the number of possible solutions.<br><br> Key sizes are calculated using the formula [b = log_2 x] where 'x' is the number of letters or syllables
needed and 'b' is the key size in bits.<br><br> The ratio of syllabic to alphabetic security efficiencies is 3.558E+911 to 1. Although this is a significantly larger
contrast than expected, it validates my hypothesis. Syllabic ciphers are somewhat less convenient to create; however, the security they provide is off the charts.
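A worked instance of the key-size formula b = log_2 x from the abstract: the bits needed to index x distinct symbols. The symbol counts (26 letters; a hypothetical 500 syllables) are illustrative.

    from math import log2, ceil

    for name, symbols in (("letters", 26), ("syllables", 500)):
        bits = log2(symbols)
        print(f"{name}: {symbols} symbols -> {bits:.2f} bits "
              f"({ceil(bits)} whole bits per symbol)")
    # letters: 26 symbols -> 4.70 bits; syllables: 500 symbols -> 8.97 bits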
2005 - CS031
WRITING A MATHEMATICAL EXPRESSION COMPILER
Ramzi Sami Mukhar
Eleanor Roosevelt High School, Greenbelt, Maryland, USA
This project includes principles and techniques of compiler writing and the design and implementation of a mathematical expression compiler. <br><br>The
process of translating mathematical expressions involves most of the tasks necessary for building the core of a compiler. A mathematical expression compiler
evaluates mathematical expressions at runtime; i.e., it takes in a complex expression written in textual form, calculates it, and displays the result.<br><br>An expression consists of numerical values (3, 67.9, 0.2), operations (*, /, +, -), and symbols such as parentheses, all interspersed. The
compiler has to figure out which is the data and which is an operation.<br><br>This project will include:<br><br>- Compilers, Lexical Analysis and Syntax
Analysis <br><br>- Data structures and algorithms (e.g. stack)<br><br>- Prefix, Infix concepts<br><br>- Object Oriented Programming
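A compact sketch of the pipeline the project names: lexical analysis (tokenize), syntax analysis with a stack (Dijkstra's shunting-yard, converting infix to postfix), then evaluation. This is the generic textbook construction, not the project's own code.

    import re

    PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

    def tokenize(text):
        return re.findall(r"\d+\.?\d*|[-+*/()]", text)   # lexical analysis

    def to_postfix(tokens):
        out, ops = [], []                      # ops is the operator stack
        for tok in tokens:
            if tok not in PREC and tok not in "()":
                out.append(float(tok))         # it's data, not an operation
            elif tok == "(":
                ops.append(tok)
            elif tok == ")":
                while ops[-1] != "(":
                    out.append(ops.pop())
                ops.pop()
            else:
                while ops and ops[-1] != "(" and PREC[ops[-1]] >= PREC[tok]:
                    out.append(ops.pop())
                ops.append(tok)
        return out + ops[::-1]

    def evaluate(postfix):
        stack = []
        for tok in postfix:
            if isinstance(tok, float):
                stack.append(tok)
            else:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b if tok == "+" else a - b if tok == "-"
                             else a * b if tok == "*" else a / b)
        return stack[0]

    print(evaluate(to_postfix(tokenize("3 + 67.9 * (0.2 - 1)"))))   # -> -51.32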
2005 - CS017
DESIGNING A GENERAL PURPOSE EQUATION SOLVER USING FIELD PROGRAMMABLE GATE ARRAYS ON NASA'S HC-36 HYPERCOMPUTER
Vivek Mukherjee
Tabb High School, Yorktown VA, United States
The purpose of this project is to design a general-purpose equation solver for NASA’s HC-36 hypercomputer. Three programs were written to implement the
Gaussian Elimination algorithm to solve general matrices, including the near-singular Hilbert Matrix System. Two of these programs were written in C++. The
first program generates Hilbert Matrices in a two dimensional array with a user designated right-hand side. The second program implements Gauss Elimination
to determine solutions for general matrices of the form Ax = b. This program also calculates the absolute error norm for each system size by taking the
magnitude of the residual errors (|b-Ax|). The third program implements Gauss Elimination in VIVA 2.4®, the hypercomputer programming language, to obtain
matrix solutions on the hypercomputer. The output of both Gauss Elimination algorithms includes solutions and absolute error norms (residual magnitudes).
For both algorithms, the behavior of the error norms was analyzed with respect to the system size using the Hilbert Matrix System.<br><br> For the C++
algorithm, plotting the error norm vs. system size produced an exponential increase in the error norm for 1<= Number of Equations (NEQ) <= 5. Upon
examining the NEQ between 5 and 100, the error asymptotically levels off and oscillates between 8.044e-30 and 2.67e-27. This shows that the C++ algorithm
is fairly accurate, but the lack of precision restricts it from solving the Hilbert Matrix past 5 equations. The hypercomputer algorithm successfully solved systems
up to a size of 8x8, but larger systems are currently being implemented. The absolute error norm increased exponentially over the NEQ domain of 8. Thus, the
hypercomputer produced the most accurate, reliable solution sets for the Hilbert Matrix System.
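A sketch of the C++ pipeline in Python: build a Hilbert matrix H (H[i][j] = 1/(i+j+1)), solve Hx = b by Gaussian elimination with partial pivoting, and report the residual norm |b - Hx|. The right-hand side of all ones is an illustrative choice, since the project lets the user designate it.

    from math import sqrt

    def hilbert(n):
        return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

    def gauss_solve(A, b):
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]    # augmented matrix
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]             # partial pivoting
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):                  # back substitution
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

    for n in (3, 5, 8, 12):
        H, b = hilbert(n), [1.0] * n
        x = gauss_solve(H, b)
        residual = sqrt(sum((bi - sum(hij * xj for hij, xj in zip(row, x))) ** 2
                            for row, bi in zip(H, b)))
        print(n, residual)   # watch the error norm grow with system size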
Awards won at the 2005 ISEF
$1,000 Honorable Mention Scholarship Award in Computer Science - Robert Luby, Jr. of IBM
2005 - CS058
USING THE SUBSET-SUM TO DEVELOP SECURE ENCRYPTION ALGORITHMS
Christopher Michael Mullins
Meeker High School, Meeker, Colorado, United States
Problem: How can a secure encryption algorithm be developed? Can the subset-sum algorithm be used to securely encrypt data?<br><br> Hypothesis: The
subset sum can provide ample security for an encryption algorithm.<br><br> Methods: Using Visual Basic, I developed an encryption program based on the
ideas of the subset-sum algorithm. The program generates a private key, from which public keys can be generated. It then encrypts data using the public key
and has the ability to decrypt the data with the correct private key.<br><br> Results: I tested the program using up to 300 MB (over 310,000,000 characters) of
data. The program executed the encrypt and decrypt functions quickly and the decrypted strings came out flawlessly. I encountered problems with the storage
of the large integers generated by the encryption algorithm. I solved this by translating the large integer into its 16-bit integer representation, splitting the 16-bit
binary string into two 8-bit strings and storing them as their ASCII representations. In the future, I would like to add additional data to the keys for improved
encryption.<br><br> Conclusion: I have discovered that it is indeed possible for the subset-sum to securely encrypt data.
2005 - CS313
MINIMIZING TOTAL WEIGHTED TARDINESS ON A SINGLE MACHINE WITH JOB RELEASE DATES AND EQUAL PROCESSING TIMES
Vinayak Muralidhar, Andrew Gamalski
Corona del Sol High School; Hamilton High School, Chandler, Arizona, USA
The aim of this project is to study the general structure and to determine the complexity of the problem of minimizing total weighted tardiness (TWT) on a single
machine with job release dates and equal processing times (1 | p_j = p, r_j | Sum w_j T_j).<br><br> Integer programs were formulated to mathematically describe the
problem, and software programs ran these formulations to produce optimal solutions to certain job instances. Although these programs were computationally
expensive to run, they provided a basis for comparison for the heuristics. Furthermore, theorems were developed based on observations about the problem.
<br><br> Well-known and original heuristics were applied to sets of jobs and evaluated. The known heuristics used were EDD (Earliest Due Date), GWF
(Greatest Weight First), FIFO (First In First Out), and ATC (Apparent Tardiness Cost). The original heuristics used include HWDD (Hybridized Weight-Due
Date) and Sum (Summation). The original Pairwise Swap Improvement Algorithm and Hungarian Improvement Algorithm were also applied to each heuristic.
<br><br> It was found that the Hungarian Algorithm-Improved HWDD (Hungarian HWDD) heuristic performed the best, reducing the TWT from the ATC
heuristic, originally the strongest heuristic for this problem, by 65.60%, and arriving within a mere 1.20% of the optimal solution, on average.<br><br> The
contributions of this project are twofold. First, integer program formulations and theorems have added knowledge about the structure of this problem. Second,
this project offers a variety of heuristics that run in polynomial time, and has developed the powerful Hungarian HWDD heuristic that far outperforms ATC and
comes very close to optimal solutions.
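As an illustration of the dispatch-rule heuristics being compared, the sketch below implements the simplest of them, EDD, for equal-length jobs with release dates, and scores the schedule by total weighted tardiness Sum(w_j * T_j). The job data are invented for the example; this is not the authors' code:

def edd_twt(jobs, p):
    # jobs: list of (release, due, weight); p: common processing time
    pending = sorted(jobs, key=lambda j: j[0])   # by release date
    ready, t, twt = [], 0, 0
    while pending or ready:
        while pending and pending[0][0] <= t:
            ready.append(pending.pop(0))         # job has been released
        if not ready:
            t = pending[0][0]                    # idle until next release
            continue
        ready.sort(key=lambda j: j[1])           # earliest due date first
        r, d, w = ready.pop(0)
        t += p                                   # completion time C_j
        twt += w * max(0, t - d)                 # tardiness T_j = max(0, C_j - d_j)
    return twt

print(edd_twt([(0, 4, 2), (1, 3, 5), (2, 10, 1)], p=2))   # -> 5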
2005 - CS042
DEVELOPMENT OF AN IMPROVED COMPUTERIZED COLOR TRANSFER PROCESS
Andrew Ryan Nadlman
Blue Valley North High School, Overland Park KS, USA
During digital image processing, imposing the color and appearance of one image onto another is a routine process, yet it requires a significant amount of time
and computer processing power to complete. This research project was focused on developing an improved color transfer method to perform color transfers
between images more efficiently and with comparable accuracy. One method of color transfer, demonstrated by researchers at the University of Utah, functions
by importing each pixel from transfer images into the computer and using statistical standardization to map one image’s look and feel onto that of another. The
University of Utah method, however, uses the computer to interact not only with individual pixels on the screen but also to store the color values in active
memory. The focus of this project is on enhancing the speed without compromising the accuracy of the color transfer through the integration of a process of
random sampling of image pixels. The random sampling yields many fewer pixels, yet by sampling appropriate proportions of the images involved, the samples remain representative of the entire image and accomplish a color transfer with minimal error. <br><br> The data indicate that image processing time can
be significantly reduced by the incorporation of the proposed statistical and computational shortcuts into the process of color transfer between images without
compromising the quality of the color transfer. The time advantage of the enhanced method over the control method was found to be dependent upon the
number of pixels in the transfer images.<br><br>
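The statistical core of such a transfer is mean/variance matching, and the speedup comes from estimating those statistics on a random sample of pixels rather than on every pixel. A hedged sketch (NumPy assumed for illustration; the published method also works in a decorrelated color space, which this per-channel simplification omits):

import numpy as np

def sampled_stats(img, n=5000, rng=np.random.default_rng(0)):
    # estimate per-channel mean and std from a random subset of pixels
    pixels = img.reshape(-1, img.shape[-1])
    idx = rng.choice(len(pixels), size=min(n, len(pixels)), replace=False)
    return pixels[idx].mean(axis=0), pixels[idx].std(axis=0)

def color_transfer(source, target):
    # standardize the source channels, then rescale to the target's statistics
    s_mean, s_std = sampled_stats(source)
    t_mean, t_std = sampled_stats(target)
    out = (source - s_mean) / (s_std + 1e-8) * t_std + t_mean
    return np.clip(out, 0, 255)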
2005 - CS045
SOLUTION FOR LEVELS-OF-DETAIL MESH RENDERING THROUGH REMESHING
Muhammad Sadan Nasir
ADP IQRA, Quetta Balochistan, Pakistan
Rendering large and detailed models efficiently in real time is a common problem. In order to maintain a constant frame rate, it is necessary to use solutions
like view-dependent and continuous Levels-Of-Detail meshes. This research was an effort to find a new solution for Levels-Of-Detail mesh rendering that takes advantage of modern graphics cards. A new algorithm for view-dependent, continuous Levels-Of-Detail mesh rendering is presented as the result of this research. Different techniques for surface simplification, mesh decomposition, and Levels-Of-Detail were analyzed through experiments during the formation of the algorithm. First, the mesh is simplified using a surface-simplification method to obtain a base domain; this base domain is then used to measure scalar as well as vector attributes for generating Levels-Of-Detail for the original mesh. The polygon-simplification process disturbs most of the triangle strips, which decreases the overall efficiency of rendering, so the algorithm also manages the triangle strips during surface simplification to accelerate the rendering process. Triangle clusters are processed instead of individual triangles to reduce CPU workload. Geo-morphing is supported through the capabilities of the vertex shader, which further increases the overall efficiency of the algorithm. The algorithm can be applied in real-time simulations, game engines, and medical visualization systems to increase rendering efficiency and make visualization of large datasets possible.
2005 - CS035
"DARSHAN" COMMAND INTERPRETER FOR THE BLIND.
Catalina Sanchez Navarro
CBTis No. 139, San Francisco del Rincon, Guanajuato, Mexico
Blind people in Guanajuato can hardly operate a computer effectively with reading and writing techniques adapted to the Braille code. The main goal of this prototype is to enable the blind to operate a computer self-sufficiently. The integration of a writing and reading method following the Braille code led to the creation of new hardware and new software, named Darshan.<br><br>Following the prototype model, which incorporates some of the final product components, the methodology was structured in three stages: the data input technique, an interface controlling the process, and the design of a tactile data output device that produces embossed Braille-code characters. <br><br>Darshan is a command interpreter for the blind based on the Braille code. The input of data is accomplished by adapting an abbreviated writing method to a standard keyboard. The KG is an electromagnetic mouse-like device built with a series of bobbins and awls, arranged to make tangible data output possible; it is connected to the CPU through the parallel port. The commands generated with this method allow the blind to operate a variety of applications such as editing, searching, and reading text files.<br><br>Darshan incorporates an abbreviated writing method, a command interpreter, and a hardware device. All of these components make it possible for the blind to operate a computer independently. Therefore, they will have better and greater opportunities in knowledge, education, and communication.
Awards won at the 2005 ISEF
100% paid tuition scholarship award. - Instituto Tecnologico y de Estudios Superiores de Monterrey, Campus Guadalajara
UTC Stock with an approximate value of $2000. - United Technologies Corporation
2005 - CS052
COLLABORATIVE NURSE SCHEDULING: A NOVEL APPROACH TO IMPROVE UTILIZATION OF EXISTING NURSE RESOURCES THROUGH SECURE
DATA TRANSACTIONS AND HOSPITAL CLUSTERING
Sonia Nijhawan
duPont Manual Magnet High School, Louisville KY, USA
In light of the growing nurse shortage that has affected hospitals all over the United States, a collaborative nurse scheduling system was created that connects
hospital requirements with registered nurse (RN) requests. After the prototype of the system was developed using Active Server Pages, HTML, JScript,
VBScript, and SQL Server, several hospitals reviewed the scheduler. It was found that each hospital would be sending large numbers of hospital-request transactions per hour. Therefore, several transfer protocols were added to the system to allow for the transfer of large sets of data, and a unique XML protocol
(named nXML) that defined hospital requests and scheduler response transactions was further developed and implemented. As part of collaboration amongst
hospitals, a plan was devised to incorporate “hospital clusters.” These hospital clusters consist of internal scheduling (within one hospital), an administrative
cluster (among branches of the same hospital), a physical cluster (among hospitals located in the same area), and a general cluster (on the
national/international level). Security measures were added, including the implementation of a firewall, Role-Based Security of the SQL Server, Internet Protocol
Security (IPSEC), SSL, user authentication through digital certificates, and Public Key Infrastructure with use of public/private key cryptography. After the
completion and improvement of the collaborative nursing scheduler, more data was accumulated from local Louisville hospitals. The data supported previous
results that the scheduler helps to minimize the gap between nursing supply and hospital demand.
Awards won at the 2005 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
$1,500 Third Place Scholarship Award in Computer Science - Robert Luby, Jr. of IBM
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
2005 - CS305
HIBRIDGE: BRIDGING MULTIPLE HIERARCHIES THROUGH STRUCTURED XML MESSAGES
Petko Nikolov, Michael Borohovski, Sean Cleary
Stuyvesant High School, New York City, NY, United States
The concept of HiBridge arose from the need to keep track of related events from different data sources and the current states of those events. HiBridge has
two main functions: to act as a bridge, linking together different informational hierarchies, and to serve as a dependency manager. Much of the data contained
within these hierarchies requires certain information from other hierarchies before the former is evaluated. Thus, HiBridge was developed as a bridge for the
transportation of messages between the different hierarchies.<br><br>We wrote two different versions of HiBridge, and the final one implements the full
functionality for which we originally strived.
2005 - CS015
IMPROVING EEG SIGNAL RECOGNITION FOR HUMAN/MACHINE CONTROL
Farre Nixon
Pleasant Grove High School, Texarkana, Texas, USA
Scientists conducting research often have to perform many experiments without making physical contact with harmful or unknown chemical compounds. The
purpose of this project is to design a program that would assist researchers in their comprehension of electroencephalogram (EEG) brain signal fundamentals,
so that manipulation of the EEG to control mechanical devices can be conducted. <br><br> Using data from cortical probe implants, both the spike train and the EEG were recorded simultaneously. A program called getToneData was created to analyze the data received from the monitor. Designing the getToneData
Program required a basic knowledge of the C++ and MATLAB programming languages. Using the information acquired from the tests via the cortical probes,
numerical values were given to action potentials by measuring how many fired per second. This information was labeled a "tonearray" file, and each file
consisted of thousands of action potential readings. The basic components of the program consisted of finding the mean and range of the spikebins around
each tone, accomplished by using vectors and matrices. By adding the vectors vertically, the peri-stimulus time histogram (PSTH) was created. The histogram
was used to compile the data so that researchers then could compare the amplitude and frequency of the action potential in the range around the tones given
off by the computer. <br><br>
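For readers unfamiliar with the peri-stimulus time histogram, the standard binning computation can be sketched as follows (illustrative NumPy, not the getToneData program itself):

import numpy as np

def psth(spike_times, stimulus_times, window=1.0, bin_width=0.05):
    # spike_times, stimulus_times: NumPy arrays of event times in seconds;
    # collect spike times relative to each stimulus, then bin and average
    edges = np.arange(-window, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for t0 in stimulus_times:
        rel = spike_times - t0
        rel = rel[(rel >= -window) & (rel <= window)]
        counts += np.histogram(rel, bins=edges)[0]
    return counts / (len(stimulus_times) * bin_width)   # mean firing rate, Hz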
Awards won at the 2005 ISEF
All expense-paid trip to attend the U.S. Space Camp in Huntsville, Alabama and a certificate - National Aeronautics and Space Administration
Award of $3,000 in savings bonds, a certificate of achievement and a gold medallion - U.S. Army
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the U.S. Navy and Marine Corps.
2005 - CS026
INVESTIGATION OF POST-OPERATIVE GAIT DYNAMICS IN FEMORO-TIBIAL SURGERY PATIENTS USING THREE DIMENSIONAL MODELING
SOFTWARE
Jay Kiran Parekh
Coral Reef Senior High, Miami, Florida, United States
The performance of the femoro-tibial joint may be severely compromised due to the presence of certain conditions. These include bow-legged or knock-kneed
gait stances, improperly sized femoral and tibial replacement components, and improperly placed torn ligament grafts. The dynamics of the knee, including
flexion capabilities, cartilage sustainability, and knee stability, are some properties affected by these detrimental orthopedic circumstances.<br><br>The
purpose of this investigation is to graphically represent the effects of four femoro-tibial operations on the gait of patients. The four orthopedic procedures
include anterior cruciate ligament (ACL) reconstruction, high tibial osteotomy, unicondylar knee replacement, and total knee replacement. The animations were
constructed using the Software for Interactive Musculoskeletal Modeling. By incorporating data on the body’s joint angles and various force readings, the
software forms an animated skeletal model. <br><br>The results demonstrate that post-surgical knee dynamics can be represented. For the ACL
reconstruction, the increase or decrease in flexion or the relative stability of the tibia with respect to the femur was shown. The changes in force magnitude and
orientation were graphically described in the high tibial osteotomy animations. For the unicondylar knee replacement, the increase in the ground reaction forces
on either the medial or lateral sides of the tibial head were shown. The loss in flexion angle was shown through crouching motions in the total knee replacement
models. Overall, the use of 3-D modeling software provided accurate, qualitative animations of the post-surgical gait of patients. Future modeling may provide
for more quantitative representation, promoting further analysis.
2005 - CS008
SCREEN NAVIGATION FOR THE VISUALLY IMPAIRED
Daniel Jonathan Parrott
Bartlesville High School, Bartlesville, Oklahoma, United States
Over a million visually impaired youths and adults use computers on a regular basis. The need to advance and improve upon technologies that help the visually
impaired work with computers is consequently growing as well. Currently, there are few, if any, tools available that tie together screen interpretation and navigation. A practical solution to this is a navigation program, written in the C++ language, which utilizes system hooks and text-to-speech.<br><br>
System hooks provide a means to intercept input messages sent to the operating system. By capturing these messages and saving them to a file, they can
later be played back as a special type of macro. Further, the navigation program allows the end user to reference each file through a hot key: a task that took
many steps to execute now just takes one.<br><br> The navigation program also provides a 4Kbyte data buffer to store key input from the user. This works in
conjunction with the text-to-speech engine to provide the user a means to proofread documents and to also validate character strings, such as passwords. The
program registers the arrow keys as hot keys to the system so that the user can more easily scroll through the data buffer.<br><br> This proofreading
mechanism, alongside the program’s navigation mechanism, helps to greatly reduce eyestrain for the visually impaired and to increase productivity – the main
goal that this project has in mind.<br><br>
2005 - CS314
COMBINING HANDWRITING RECOGNITION METHODS FOR INCREASED ACCURACY
Alan Garrett Pierce, Nate John Broussard
School of Science and Technology, Beaverton, Oregon, USA
Our goal was to design a computer program that could accurately read handwriting and text. First, it takes an image of handwriting or text and a text file with
the same content. It uses these two to increase its knowledge of the person’s handwriting or the font. This step is called incorporation. Then, it takes an image
without a corresponding text file and turns that image into a text file. This step is called reading. The main practical application of this program is that it can be
used to easily convert a handwritten paper or a book into text.<br><br>In order to both incorporate and read, the program has to divide the image into letters.
This is called blocking. It does so by dividing it into lines, then words, then letters. Once the image is blocked, the program will have a list of images. It then
either incorporates or reads the images using several different methods. These methods all find out different information about the letter. When incorporating,
the relative accuracy of the methods is calculated. This information is used later to make reading more accurate.<br><br>Our program works almost 100% of
the time on most typed fonts where the letters do not overlap. When we tried our program on handwriting it was able to recognize the correct letter most of the
time, as long as it was able to correctly block the image. However, there were still some errors when reading handwriting.
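The blocking step can be illustrated with a projection profile: scan for empty rows to cut a binary image into lines, then apply the same idea column-wise for words and letters. A minimal sketch under the assumption of a NumPy array with ink pixels set to 1 (not the authors' code):

import numpy as np

def split_lines(img):
    filled = img.sum(axis=1) > 0           # which rows contain any ink
    lines, start = [], None
    for y, f in enumerate(filled):
        if f and start is None:
            start = y                      # a text line begins
        elif not f and start is not None:
            lines.append(img[start:y])     # a text line ends
            start = None
    if start is not None:
        lines.append(img[start:])
    return lines                           # repeat column-wise for words, letters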
2005 - CS041
NEW BINARIZATION TECHNIQUE FOR DIGITAL CAMERA CAPTURED IMAGE BASED OCR
Nat Piyapramote
Sarasit Phithayalai School, Ratchaburi, THAILAND
Optical Character Recognition (OCR) is a well-known technology for transforming a document image into an editable text file. The system normally uses a scanner as the input device. Nowadays, as the digital camera has become a common device for capturing images in digital format with sufficient quality, it can be used as an alternative input device. However, when a digital camera is used as the input device, the problem of light variation demands a more effective binarization process in the OCR. <br><br> I propose a binarization technique, crystallization thresholding, that applies a k-means thresholding method based on crystallized image areas. This technique was compared to several other methods, e.g., k-means global thresholding, k-means local thresholding, and the binarization techniques used in commercial OCR. The tested images were captured by the digital camera at 120 dpi with a size of 1600x1200 pixels using macro mode. <br><br> The proposed binarization method also improves the pre-processing modules of the OCR system, e.g., the deskewing process. Experimental results show that it decreases recognition error rates.<br><br>
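The k-means thresholding on which the technique builds can be sketched for k = 2 as follows (illustration only; the crystallization of image areas is not reproduced here):

import numpy as np

def kmeans_threshold(gray, iters=20):
    c0, c1 = float(gray.min()), float(gray.max())   # initial cluster centers
    for _ in range(iters):
        mask = np.abs(gray - c0) < np.abs(gray - c1)
        c0, c1 = gray[mask].mean(), gray[~mask].mean()
    t = (c0 + c1) / 2.0                    # threshold midway between centers
    return (gray > t).astype(np.uint8)     # binary text/background split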
2005 - CS001
EXAMINING THE SPECIFICS OF HIGH PERFORMANCE COMPUTER CLUSTERS UNDER DIFFERENT OPERATING SYSTEMS: A SECOND YEAR
STUDY
Andrew Williams Powell
Trinity Catholic High School, Ocala, Florida, United States
<br><br>In the previous study, completed last year, it was hypothesized that generic personal computers running Linux software could be assembled and
linked together in a cluster to greatly enhance the overall processing speed. This hypothesis was proven correct. This year it was further hypothesized that this
could be accomplished using the Windows operating system, and that it was possible to program the master to partake in the computations as well.<br><br>
The original computers used last year were reused. Windows 2000 Advanced Server was chosen as the operating system and the appropriate clustering
software was obtained via the internet. The software was installed and the computers clustered over the network.<br><br>To test the cluster’s performance
the POVray benchmarking software was run on the system first with the Master, followed by the Master and Node 1, and lastly all three computers. It was found
that with just the Master it took 7 minutes and 13.8 seconds to complete the processor intensive program. When the Master and Node 1 were connected the
total time was reduced to 3 minutes and 42 seconds. When Node 2 was added (3 computers), the processing time was 2 minutes and 48.96 seconds. <br>
<br>Based on the results it was determined that the hypothesis was upheld. Thus, it is possible to cluster using Windows software, and the performance was
superior to the Linux cluster. The master can also aid in the computations. This technology can be used to create clusters with speeds rivaling that of more
expensive mainframe machines. <br><br>
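The reported times imply near-linear scaling, which is easy to verify: speedup is T1/Tn and parallel efficiency is speedup divided by the number of machines. A quick worked check of the POVray figures:

times = {1: 7 * 60 + 13.8, 2: 3 * 60 + 42.0, 3: 2 * 60 + 48.96}   # seconds
for n, t in times.items():
    speedup = times[1] / t
    print(f"{n} machine(s): speedup {speedup:.2f}, efficiency {speedup / n:.0%}")
# 2 machines: ~1.95x (98%); 3 machines: ~2.57x (86%)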
2005 - CS010
LEONAR3DO 3D FOR ALL!
Dániel Rátai
John von Neumann Computer Science High School, Budapest, Hungary
The personal computer of today's world is two-dimensional. However, when we think of tomorrow's computers, fewer and fewer people imagine them in a two-dimensional form. Sci-fi films represent futuristic computer models as three-dimensional. The current three-dimensional techniques and designs are, however, lacking in the functions needed or are too expensive. This is the reason why the larger audience - target or otherwise - knows so little about these
"developments". With the creation of Leonar3Do, my goal was to step beyond current boundaries and develop a technique which is at last capable of providing
the general public with a three-dimensional PC. Leonar3Do performs spatial positioning based on the 3D reconstruction of stereo pictures. The two pictures are
produced by the most inexpensive webcams on the market. Due to a special method, the system has 1/6-1/8 mm sensitivity in the case of a 17'' screen, even in
a light room. To achieve the three-dimensional effect, Leonar3Do uses shutter glasses. With the webcams the light sources' position can be followed, so
Leonar3Do can follow the position of the glasses, and simultaneously the user's eyes. Thus the stereoscopic picture on the screen can be set to the actual eye position, eliminating distortions so that the illusion has correct proportions. In space, Leonar3Do performs operations with a special pen that has a light source in place of the nib. One sits in front of the computer's screen, wearing the glasses, with the pen in hand. The webcams are
fixed to two stands in a way that they barely see the screen, have all of the light sources in the field of vision, but do not see each other. Thus the computer can
define the position of the glasses and the pen in comparison to the screen. Expected areas of use: engineering, art, animation, movies, three-dimensional
scanning, medical use, education, developing skills and abilities, games, game developing, extra operation area, touch screen.
Awards won at the 2005 ISEF
First Award of $700 - IEEE Computer Society
Award of $5,000 - Intel Foundation Achievement Awards
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Computer Science - Presented by Intel Foundation
The SIYSS is a multi-disciplinary seminar highlighting some of the most remarkable achievements by young scientists from around the world. The students
have the opportunity to visit scientific institutes, attend the Nobel lectures and press conferences, learn more about Sweden and experience the extravagance
of the Nobel festivities. - Seaborg SIYSS Award
First Award of $200 - Patent and Trademark Office Society
2005 - CS049
ARTIFICIAL COGNITION AND MEMORY: TISSUE IMAGE ANALYSIS FOR TUMOR DIAGNOSIS
Kimberly Elise Reinhold
Saint Joseph's High School, Hilo, Hawaii, U.S.A.
Tumor diagnosis is a critical medical procedure requiring high-level cognitive functions of perception and memory. An application for autonomous tissue image
analysis with complex pattern recognition abilities and powerful learning algorithms was pursued. To create such a system, novel procedures based on the
neurobiology of human cognition were developed. A model of the visual pathway containing over half a million virtual neurons was designed. It used original
artificial neural networks for color processing and abstract pattern evaluation as well as simulations of memory consolidation mechanisms of the brain.<br><br>
Two applications were chosen to demonstrate the artificial perception and artificial memory capabilities of the model. Brain tumor identification relied on color
delineation and color quantification techniques. Results were excellent (93% accuracy). The even more challenging job of salivary gland histopathology
demanded additional discriminants (attribute evaluation procedures) for tissue architecture (relative positions of nuclei). <br><br> The method devised to
generate such tissue architecture discriminants was called memory-based pattern matching. It employed short- and long-term memory techniques to identify
characteristic components of each image category by searching for repeating nuclear patterns. The artificial visual system then used these nuclear patterns to
distinguish the tumor types. Training was efficient, and classification of unfamiliar input was accurate (85% on average). <br><br> The autonomous biological
modeling paradigm invented in this project is highly successful at tumor diagnosis in multicellular tissues and has validity for a wide range of other difficult
pattern recognition problems.
Awards won at the 2005 ISEF
Award of $500 - American Association for Artificial Intelligence
First Award of $1,000 - Association for Computing Machinery
$2,500 Second Place Scholarship Award in Computer Science - Robert Luby, Jr. of IBM
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
First Award of $3,000 - U.S. Air Force
2005 - CS048
DEVELOPMENT OF CD-ROM-BASED PC CLUSTER SYSTEM FOR MPI PARALLEL PROCESSING
Haruo Sakuda
Hiroshima Prefecture Hiroshima Kokutaiji Senior High School, Hiroshima, Japan
Parallel processing improves the performance of computer systems by dividing a given task into smaller subtasks and simultaneously executing those subtasks
on several computers called processors.<br><br>A typical library specification for parallel processing is MPI (Message Passing Interface).<br><br>Although
MPI has been implemented on many platforms such as supercomputers and dedicated PC cluster systems, such platforms are only accessible to universities
or companies because their installation and maintenance costs are very expensive.<br><br>The objective of our project is to realize an inexpensive PC cluster
system by using existing PC networks introduced to high schools for educational use.<br><br>Because such networks are used daily by many students, we
cannot change any system configurations permanently.<br><br>In order to overcome these limitations, we have developed a method that builds a temporary PC cluster from a bootable CD-ROM.<br><br>The system does not change any static system configurations and is very inexpensive because, apart
from the CD-media, it is entirely made from freeware.<br><br>In addition, it can always be constructed by the CD-ROM even if a DHCP server operates in the
system.<br><br>The overall performance of the PC cluster system constructed over the PC network in our high school was experimentally evaluated.<br>
<br>The experiment results indicated a peak performance of 9476 MFLOPS was achieved by 64 Pentium-4, 2.2-GHz PCs, a performance as good as that of a
commercial supercomputer.<br><br>
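As a minimal illustration of the message-passing model such a cluster runs (mpi4py is assumed purely for illustration; the project's own evaluation used FLOPS benchmarks, not this toy job):

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# each process sums its share of 1..N; the root gathers the partial sums
N = 10_000_000
local = sum(range(rank + 1, N + 1, size))
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print("sum =", total)   # e.g., run with: mpiexec -n 64 python sum_mpi.py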
2005 - CS055
A NEW MULTIDIMENSIONAL FEATURE-MAPPING ALGORITHM
Ocan Sankur
Galatasaray High School, Istanbul, Turkey
Feature mapping algorithms are useful in visualizing on a plane the similarities between multidimensional objects. In other words, mapping algorithms reduce
the dimensionality of the data while preserving the similarities between objects and resulting in subjectively plausible visualizations. We propose a new
mapping algorithm that we call “Circular Map”. We demonstrate that it can create correct maps, which are often in agreement with the ones given by the algorithms in the literature, and that it furthermore outperforms them. We apply our algorithm to a novel application: clustering languages in terms of their n-gram features.
Awards won at the 2005 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
DuPont will additionally recognize Honorable Mention winners with an award of $500. - National Aeronautics and Space Administration
Scholarship Award of $1,000 - National Collegiate Inventors and Innovators Alliance/The Lemelson Foundation
2005 - CS309
DESPERTARTE
Jobad Abdeel Sayas Hernandez, Jose Alberto Medina Hernandez
C.B.T.i.s. No.168, Aguascalientes, Aguascalientes, Mexico
DespertARTE is a Visual Basic computer system developed to stimulate and encourage drawing abilities in youth and teenagers through a systematic learning procedure.<br><br>From their early years, children have natural drawing abilities and interests, but their interpretation of the world and reality is not completely defined. Once we become teenagers we assimilate reality better, but our interest in drawing decreases.<br><br>Drawing is an ability that, once well developed, can be used not only for art but in any aspect of human activity: 1) Drawing stimulates our creativity and mental processes 2) Drawing allows us to visually represent human thinking 3) Drawing allows us to better communicate our thinking to others<br><br>DespertARTE is conceived as a specialized drawing school and is structured in 6 modules: 1) Drawing Overview 2) Basic Techniques 3) Technical Drawing 4) Comics Drawing 5) Portrait Drawing 6) The Virtual Room<br><br>Each of the first five includes three areas:<br><br>a) The Audiovisual: uses multimedia to show the theme contents.<br><br>b) The Reception: allows updating of contents and student profile registries.<br><br>c) The Studio: a space to practice the knowledge acquired in the audiovisual area, organized according to the drawing classification (i.e., the human body structure classified by age and gender)<br><br>The software can be used for educational programs in public and private schools at all grade levels.
Awards won at the 2005 ISEF
Excellence Team Award of $250 - Society for Technical Communication
2005 - CS043
EFFICIENCY TRENDS OF DISTRIBUTED PROCESSING IN JAVA
Christopher Gunnar Siden
Southside High School, Greenville, South Carolina
The purpose of this experiment is to find basic trends in the efficiency and speed of distributed processing in Java along with their strengths. The experiment
involved dividing a simple task between multiple nodes and measuring the time it took to perform the task. The task used in this experiment was the integration
of the function y = x*sin(x)+x+1. The speed was measured in iterations per second, and the efficiency was calculated as the speed achieved with a certain number of nodes divided by an expected speed, estimated by multiplying the speed of one node performing the task by the total number of nodes. The distribution took place by reading and writing information to sockets on other computers. One central computer was used as the main node; given the number of iterations to run, it divided the work into equal-size tasks and distributed them to the other nodes. The trends found were that speed increased with more nodes up to a certain point while efficiency decreased. The increase in speed is probably due to the increase in processors available to do calculations; however, at a certain point the speed actually decreased because more time was lost to data-transfer overhead than was saved by adding an extra processor. This experiment shows that the number of nodes is a very important part of distributed processing and must be carefully suited to the size of the
task.<br><br>
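The division of work described above can be sketched as follows, with local processes standing in for the socket-connected nodes and the abstract's y = x*sin(x)+x+1 as the integrand (illustrative only):

import math
from multiprocessing import Pool

def f(x):
    return x * math.sin(x) + x + 1

def integrate(args):
    a, b, steps = args                     # midpoint rule on [a, b]
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def distribute(a, b, steps, nodes):
    h = (b - a) / nodes                    # equal-size subranges per node
    chunks = [(a + i * h, a + (i + 1) * h, steps // nodes) for i in range(nodes)]
    with Pool(nodes) as pool:
        return sum(pool.map(integrate, chunks))

if __name__ == "__main__":
    print(distribute(0.0, 10.0, 1_000_000, nodes=4))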
2005 - CS039
PIXEL FILTRATION AND IMAGE ANALYSIS UTILIZING ARTIFICIAL INTELLIGENCE
Daniel Stephen Smith
Episcopal High School, Jacksonville, Florida, USA
The objective of this project was to design a program that can dynamically filter out the background in a digital image of a vehicle and correctly identify the
vehicle by analyzing data from the filtered image using an artificial neural network. Neural networks are extremely dynamic and efficient systems used to make
very accurate predictions with large amounts of data, such as that in a digital image. The investigation began with taking pictures of vehicles as they passed by
with a digital camera stabilized on a tripod. A picture of the background at the same location, when no cars were present, was also taken for later background
subtraction. A program was developed in Visual Basic that processed the image to remove the background behind the car as well as filter out extra noise in the
image. The program was then able to measure dimensions of the vehicle such as length, width, height, and slopes of the front and back windows. Data was
collected for 500 vehicles. This data was then plugged into a neural network that was written in C++ using a three-layer system. The neural network trained on
this data, and then data from 100 more vehicles were plugged into it for predictions. The neural network was very successful in its predictions, identifying most
vehicles correctly, and had an error percentage of 13.95%. <br><br>This system has many real world applications ranging from military and astronomy use to
everyday traffic enforcement.
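The background-subtraction step can be sketched in a few lines (NumPy used for illustration; the project implemented this stage in Visual Basic, and the threshold value here is a placeholder):

import numpy as np

def subtract_background(frame, background, threshold=30):
    # mark pixels that differ enough from the empty-road reference image
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=-1)
    return diff > threshold   # boolean mask of likely vehicle pixels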
2005 - CS051
IDENTIFICATION OF DIFFERENTIAL SURFACE PROPERTIES ON A TRIANGLE MESH FOR FACIAL AND OBJECT RECOGNITION
Justin Moore Solomon
Thomas Jefferson HS for Science and Technology, Alexandria, Virginia, USA
A new algorithm for facial and object recognition using triangle mesh data from laser scans, 3D reconstruction programs, or other similar input sources is
presented. Rather than comparing vertex and edge locations, however, a preprocessing step is added in which several intrinsic differential characteristics of the
models are identified, including ridges, parabolic curves, and principal directions. This data is calculated using specialized mesh processing methods which
were developed as part of the recognition algorithm. Then, properties are compared using the largest clique size of association graphs constructed from the
geometric mesh data. Preliminary testing using object meshes yielded an average recognition rate of 90%. While facial recognition rates were lower, they increased significantly with higher mesh resolution, indicating that recognition could be much more accurate using high-quality meshes on a faster machine.
The algorithms developed for this project can be used in a variety of applications related to computer vision and graphics such as biometric identification.
Awards won at the 2005 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
Scholarship Award of $20,000 - Department of Homeland Security, University Programs Office
First Award of $3,000 - Computer Science - Presented by Intel Foundation
Award of Merit of $250 - Society of Exploration Geophysicists
2005 - CS005
DISCRETIZATION-POLYMORPHIC DATA CONCEALMENT AND PROTECTION SYSTEM "STEALTH"
Danil Aleksandrovich Somsikov
Gymnasium #1, named in honour of Alla Bystrina, Odessa, Ukraine
The system is intended to protect various types of data stored on a user’s PC against unauthorized access. The system conceals the very fact that certain
information exists on the computer and hampers the identification of its meaning if the information is detected. The data protected by the “Stealth” system are
invisible on the user’s PC; therefore they can be neither read, deleted, nor copied.<br><br> This problem is relevant because of the large amount of information resources existing today; such resources can be confidential and must be protected.<br><br> The work was completed in the following phases:<br><br>- analysis of existing related products and approaches;<br><br>- problem breakdown;<br><br>- analysis of existing methods for solving the outlined subproblems;<br><br>- development of new methods for solving some of the subproblems;<br><br>- planning of the software complex, including building its level and object models;<br><br>- delineation and implementation of the separate system constituents as functional modules.<br><br> The result of the work is a software complex which fully
solves the given problem. Data concealment and protection service, provided by the “Stealth” system, can be obtained both by end users and by computer
programs (e.g., for protecting their internal information). The “Stealth” system is intended as a data-protection system of medium capability, so it can be
useful both for organizations and for private persons. The expected field of the system application is protection of data, which are personal or commercial
secrets or for some reason shouldn’t be available for unauthorized access.<br><br>
Awards won at the 2005 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
Honorable Mention Award Certificates for International students and students under the age of 16 - National Aeronautics and Space Administration
2005 - CS303
SEARCH FOR TOMORROW: A STUDY OF ASSOCIATIVE LINGUISTICS TO OPTIMIZE SEARCH TECHNOLOGY
Harish Mayur Srinivasan, Jason Hamid Rezvanian
duPont Manual High School, Louisville, KY, United States
The increasing magnitude of information available to any person with access to Internet search technology is both impressive and empowering. On the other
hand, the recent proliferation of search results with financially motivated, media-oriented, and increasingly specialized web sites has highlighted the inefficiency
of today's search engines. This is symptomatic of the industry standard methodology of ranking results by one of a variety of popularity gauges. Research
suggested that the most effective means of alleviating problems in effective results presentation is to divide the results into clearly identified categories.<br>
<br> Extensive research and iteratively refined programming produced Project Mingle. This product is an innovative new breed of webcrawler — utilizing a
refined keyword identification algorithm and associative linguistics to compile a list of logical associations. Associations were compared in consideration of
frequency, significance, and relevance to each page in the search results subset. Search results were divided into categories defined by their corresponding
associations. In one such example, some of the most popular categories in a search for the term "nuclear" were "weapons — uranium," "energy — power," and "university — research." The implications of this revolutionary new technology reach far beyond merely enhancing the user experience of
search technology. This program would create the potential to measure the physics of an idea's presence (as represented by a search category) on the
Internet. Back-end administrators could monitor the growth, momentum, and geographic spread of an idea. Applications range from refining marketing data to
tracking fashion trends to enhancing counter-terrorism intelligence.
Awards won at the 2005 ISEF
Team First Award of $500 for each team member - IEEE Computer Society
Second Award of $1,500 - Team Projects - Presented by Ricoh
Award of $3,000 in savings bonds to be shared equally by team members, certificates of achievement and gold medallions - U.S. Army
2005 - CS037
VORTECS 3D - VLSI OBJECT RECOGNITION TRAINABLE EMBEDDED CMOS SYSTEM
Malcolm Christopher Ross Stagg
Calgary Christian School, Calgary AB, Canada
The purpose of this project is to create a system for three-dimensional machine vision. A novel method is proposed to generate curves and vector lines in two- and three-dimensional space rather than using a large depth map. Thus, scene complexity dictates the size of the stored object, and resolution is very high.
Without the use of a controlled environment, features are stereoscopically matched in extremely small search space for an output of three-dimensional faces
and curved surfaces. By implementation of this onto a Xilinx Spartan 3 FPGA in VHDL, existing and newly created algorithms are modified for real-time
performance; many capable of running at the same speed as the incoming video, with a system clock of 50MHz. Digital cameras are modified for a high
resolution color input. Raw data and sync signals are taken directly from the CCD processing chip within each camera. This Bayer coded data is interpolated
into RGB and YUV, averaged with recent data, then presented to the filtering algorithms, which find object edges, represent the edges as curves, find the
possible match space, stereoscopically generate the three-dimensional output, and present to a multilayer neural network. This network is implemented in two
media: a MATLAB computer simulation and the design of an analog CMOS 0.35um chip. The design uses a novel implementation of FGMOSFETs as synapses to store and update weights. Data representation by angles and distances as well as connections allows for built-in invariance to rotation and translation in three-dimensional space.
Awards won at the 2005 ISEF
Honorable Mention Award of $250 - Eastman Kodak Company
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2005 - CS312
WHAT CHARACTERISTICS IN A DIGITAL IMAGE ALLOW FOR HIGHER COMPRESSION RATIOS USING THE JPEG LOSSY ALGORITHM?
Luis Miguel Suarez, Alex Mao Zhang
Dreher High School, Columbia, SC, U.S.A.
The experiment was designed to discern what characteristics of a digital image allow for higher compression ratios using the JPEG lossy algorithm. ArcSoft
PhotoStudio 5 was used to save the various digital images as an uncompressed TIF or bitmap file and then as JPEG files at four different levels of
compression. These levels were 75%, 50%, 25%, and 1% image quality according to this program. Because “image quality” varies among different programs, only ArcSoft PhotoStudio 5 was used for this experiment. All of the digital images were 2272 pixels by 1704 pixels, the color images being 24 bits per pixel and the grayscale images 8 bits per pixel. The actual experiment was broken up into four separate tests comparing the compression ratios of color versus
grayscale images, continuous tone images versus images with sharp tonal transitions, luminance alterations versus hue and saturation alterations, and lastly,
linearly simple versus linearly complex images. All four of these tests included realistic digital images (photo quality) and unrealistic, geometric-like images
which were created on Paint. <br><br> As expected, color images compressed more than the grayscale images, the continuous tone images compressed more
than the images with sharp tonal transitions, the luminance alterations had the highest compression ratio when compared to hue and saturation alterations, and
lastly, the linearly simple images compressed more than the linearly complex images. Overall, unrealistic images compressed more than realistic images.<br>
<br> This project gives an in-depth view into the JPEG algorithm and confirms the importance of certain characteristics when compressing digital images. <br>
<br>
2005 - CS020
AN INTENSITY EQUALIZATION PROCEDURE TO IMPROVE THE DISPLAY OF COLOR IMAGES
Katharine Anne Symanow
Divine Child High School, Dearborn, Michigan, 48128
This experiment investigated the development of an automated procedure to correct color and contrast errors in digital photographs when they are shown on
computer displays. The color spectral responses of the human eye, digital cameras, and computer displays have different sensitivities across the spectrum of
visible light. These differences result in errors in the color and contrast of images taken by a digital camera and then displayed on a computer screen - the
display does not reproduce colors with the same spectral response as the camera sensor recorded them. In this experiment, sets of calibration images with
known color content are created using software tools. (These are the controls and independent variables.) The images are then displayed on the computer
screen and photographed by the camera. Using software tools, the differences between the known calibration images and measured camera images are
calculated (dependent variables). This error information, which is caused by the differences between the spectral response of the camera and display, is then
used in a processing procedure called “image equalization” to systematically correct photographs taken by the camera when they are shown on that display.
The result is images that have better color accuracy and less contrast error. The experiment found that gray-scale color calibration images were superior to
separate red, green, and blue color images in correcting the white-balance of the images. The experiment also found that radial lens errors and cross-channel
color errors reduce the quality of digital camera photographs.
Awards won at the 2005 ISEF
Third Award of $300 - Association for Computing Machinery
Tuition Scholarship of $105,000 - Drexel University
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2005 - CS033
MODELING ENVIRONMENT FOR PARTICLES CONFIGURATION CONTROL IN MAGNETIC AND ELECTRIC FIELDS
Maxim S. Tkachenko
Lyceum of Information Technologies #1533, Moscow, Russia
A 3D modeling environment for performing virtual physical experiments has been developed that allows one to heat and trap charged particles as well as to perform diagnostics of the relevant phenomena.<br><br>The use of such a toolkit greatly eases understanding of the studied processes and helps researchers to obtain meaningful results in physical science and its applications.<br><br>The project is based on simulation of a pre-defined space-time configuration of charged particles in order to study their behavior in electric and magnetic fields. To provide universality, flexibility, and processing speed in the proposed modeling toolkit, a wide range of software tools has been applied. A flexible, user-friendly program interface is based on Borland Delphi. All
calculations are carried out by means of a program module written in Compaq Fortran. Visualizing modules initially written in Delphi have been revised in
Microsoft Visual Studio C++ with the use of the OpenGL library; as a result, the rendering speed has substantially increased. <br><br>Modeling multi-particle processes (e.g., in plasma) is a formidable task simply because of the large number of particles. This problem has been solved using data-compression algorithms and specially elaborated schemes for memory allocation. Several networked computers can be used for the task. Windows Socket Architecture (WSA) has been applied for the distributed calculation scheme, and a client-server architecture handling routing and resilience has been implemented.<br><br>
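The core computation in such a toolkit is advancing particles under the Lorentz force. A minimal sketch using a semi-implicit Euler step (illustrative only; the project's calculation module is written in Compaq Fortran):

import numpy as np

def step(x, v, q_over_m, E, B, dt):
    a = q_over_m * (E + np.cross(v, B))    # Lorentz force: a = (q/m)(E + v x B)
    v = v + a * dt
    return x + v * dt, v

x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])
for _ in range(1000):                      # traces a circular gyration in B
    x, v = step(x, v, 1.0, E, B, 0.01)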
Awards won at the 2005 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2005 - CS009
ARTIFICIAL INTELLIGENT TRAFFIC LIGHT SIMULATION
Harun Tosun
Dove Science Academy, Oklahoma City OK, US
Currently, there is too much traffic at peak hours at the intersection of N. Penn Ave. and W. Memorial Rd. in Oklahoma City, which creates traffic jams. According to the City of Oklahoma City, the biggest reason for the traffic jams is the feeder roads near the intersection. Oklahoma City’s solution was adding new lanes to the roads. In this study, I built a computer simulation model of the traffic light system at the intersection using Arena Simulation Software. Then I simulated
the traffic light scheduling system currently used by the City and an alternative scheduling system I created. The average waiting time for cars to pass the
intersection in my simulation is about 30% less than the waiting time in the current scheduling system. Results of simulations showed that it would be possible
to reduce traffic jams if a different schedule is used for traffic lights. <br><br>
2005 - CS024
DISTRIBUTED COMPUTING LOW-END CLUSTERING
Timothy John Van Cott
Arkansas School for Mathematics and Sciences, Hot Springs Arkansas, US
The goal of this experiment was to construct a distributed computer and test it using an Open Source benchmark. The distributed computer was a Linux Cluster
based on Parallel Knoppix that consisted of a baseline node and a slave node. The slave node was alternated with another computer to determine the level of
benefit from clustering with two computers. The benchmark utilized to measure the performance of the cluster is called HPCC.<br><br> The first step of the
experiment was to configure the computers to efficiently run the CD distribution of Linux and the cluster libraries. This process included basic hardware
configuration and swapping of hardware components to ensure optimal performance. Each machine in the cluster was then tested for functionality using a Dell
diagnostic CD.<br><br>The baseline performance indicator – the laptop – was booted into Linux using the previously made Parallel Knoppix CD and the
Knoppix Terminal server was initiated. The other two computers were booted using PXE and the LAM-MPI cluster was configured using the provided script.
The benchmark binaries were distributed among the cluster and then the baseline computer was benchmarked in parallel with each of the two other computers.
After repeating the benchmark three times, the result files were copied to a network FTP server and entered into Excel for analysis.<br><br>It was concluded
that the performance of the baseline PC in parallel with both a Dell PowerEdge Server and a Shuttle XPC indeed matched the sum of the individual
performance levels. In fact, the performance of the cluster exceeded the numerical combination. This performance gain was attributed to parallelism in the
linear algebra system used by the HPL component of the HPCC benchmark.
2005 - CS315
DYPTERIX PANAMENSIS: TECHNOLOGIC INNOVATION FOR MONITORING AND CONTROL
Emmanuelle Vargas-Valenciano, Virgilio Solis Rojas, Fabian Vasquez Sancho
Colegio Científico Costarricense, Sede San Carlos, San Carlos, Alajuela, Costa Rica
This research project developed a technological system combining electronics, computer science, and the environment in order to monitor the tree population of a particular region.<br><br>Visual Basic .NET 2002 was used in developing the project. This program permits interaction between remote users and a server by means of XML Web Services; the information on the server is periodically updated using this program.<br><br>The project required assembly of various electronic devices (transmitters, receivers, and oscillators) that were connected to the client by means of a parallel port. These circuits were programmed to detect felled trees.<br><br>In addition, the project has various forms, web pages, and web services that make it possible for the program to acquire and manage
important information from each of the elements, such as trees, farms, and others. Furthermore, the application has the ability to position the elements geographically.<br><br>When the project is implemented, it can determine whether a tree has been felled and where.<br><br>This project has the ability to help control human activities that cause considerable impact on the environment, such as illegal logging. In Costa Rica, this problem is critical in the case of the mountain almendro (Dypterix panamensis), and a moratorium on logging has been established by law. In addition, this species provides shelter and food to many animal species, among which is the Great Green Macaw (Ara ambigua), an endangered species.<br><br>
2005 - CS036
DNB - THE SCRIPT GENERATOR
Sergio Luiz Wermuth Figueras
Colegio Estadual Dom Alano Marie Du Noday, Palmas, Tocantins, Brazil
The project is directed at computer science and tries to solve health problems caused by a condition known as LER (Lesion by Repetitive Effort, i.e., repetitive strain injury). I have two main goals: to build a free-platform voice-recognition engine that is compatible with new recognition techniques and error-verification engines, and to eradicate LER as an issue in the programming market.<br><br>I created a programming-language editor that obeys voice commands. Programmers do not need to type the commands; they just need to say them, without using the keyboard at all. Working programmers affected by LER will be able to work again. Visually impaired programmers will be able to use the editor, and through voice-synthesizing modules they will be able to check their code, allowing them to create software through speech.<br><br>The engine is compatible with various computer architectures, and it relies on neural-network applications and artificial intelligence to reach broader performance and a higher recognition rate than current engines, improving its biometry while supporting programmers.<br><br>
2005 - CS011
VOICE-CONTROLLED HOME
Jay Mitchell Wood
Hope Academy, Talladega, AL, USA
The purpose of this project was to find out whether it is possible to create a voice-controlled environment inexpensively with a home PC. My hypothesis is that it is possible to create a voice-controlled environment inexpensively with a home PC.<br><br> First, the circuit board from an old keyboard was modified by de-soldering the Caps Lock, the Scroll Lock, and the Number Lock LEDs from the board. Next, relays were wired in place of the LEDs. Connected to the
relays were common household items. The items used were Christmas lights, a blender, and a fan. Finally, software was installed that could take an oral voice
command and initiate a keystroke, such as Caps Lock, Number Lock, or Scroll Lock.<br><br> Possible applications include voice-controlled operation of a small number of household items.
2005 - CS304
A CONTENT-BASED GRAYSCALE IMAGE RETRIEVAL SYSTEM BASED ON VECTOR QUANTIZATION
Guan-Long Wu, Hsiao-Ting Yu
National Taichung First Senior High School, Taichung, Taiwan(R.O.C.)
Due to the rapid development of the Internet and information technology and the exponential growth of image databases and digital libraries, research on image retrieval has become a very important issue. In this project, we propose an image-based retrieval scheme using vector quantization to retrieve similar images from an image database according to pre-calculated image features.<br><br> Vector quantization is a very simple image compression scheme, and we have applied it to extract features from grayscale images. In order to speed up the retrieval process, we also calculated the mean intensity values of the whole image and of the central part of an image to filter out the images that are significantly dissimilar to the query image.<br><br> The experimental results show that our proposed approach can effectively extract features from the images and enable users to retrieve them from an image database of 2363 images with 15 classes quickly and accurately. When images stored in the image database are matched to the query image, the proposed scheme can instantly retrieve the stored images in a proper ranking order in terms of similarity. In short, the proposed scheme retrieves the stored images with a precision of over 87.9% for the first five ranks. A precision-recall diagram is provided so that the scheme can be compared to other existing approaches.<br><br>
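The feature-extraction idea can be sketched as follows: cut the grayscale image into small blocks, map each block to its nearest codebook vector, and use the normalized histogram of codeword usage as the image's feature. The block size and codebook here are placeholders, not the finalists' actual design:

import numpy as np

def blocks(gray, size=4):
    # crop to a multiple of the block size, then cut into size x size tiles
    h, w = gray.shape[0] // size * size, gray.shape[1] // size * size
    g = gray[:h, :w].reshape(h // size, size, w // size, size)
    return g.transpose(0, 2, 1, 3).reshape(-1, size * size)

def vq_feature(gray, codebook):
    b = blocks(gray).astype(float)
    # squared Euclidean distance from every block to every codeword
    d = ((b[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()               # normalized codeword histogram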
Awards won at the 2005 ISEF
Team Third Award of $300 for each team member - IEEE Computer Society
2005 - CS021
SUFFIX TREE BASED WEB INFORMATION SEARCH SYSTEM
Lianlong Wu
The Affiliated Middle School of Fujian Normal University, Fuzhou, Fujian, China
Recently, inverted lists have been widely used in index mechanisms. However, there are constraints on the use of such word-based lists for Chinese, because there are no delimiters to separate Chinese words, which inevitably limits the efficiency of indexing. <br><br> This project is aimed at developing a novel Chinese information search system to facilitate the search for Chinese information. A full-text index mechanism based on a suffix tree is applied to develop the new system, so segmentation of Chinese words is not necessary. With the new system, searches for phrases, sentences, and even more complicated queries, such as whole-paragraph searches, become feasible, and accurate results can be achieved. <br><br> By means of a strategy based on the length of the query string, sorting need not be fully completed to build the index. Thus, the suffix-array algorithm has been improved to reduce the time
complexity to linear lower bound. An alternately-used array is applied to save workspace. It is proposed that a static array be scanned twice to replace dynamic
queue. A set of rules for processing html text is proposed to efficiently extract content from web pages. Therefore, a high speed and stable system is obtained.
<br><br> The new system was applied in Chinese Web Test collection (CWT-100g). The content was extracted from the 100GB web pages with a speed of
3MB/s. Index build worked at a speed of 1MB/s. The system has finished 285 items of the home/named page finding task. <br><br>
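The key property the abstract relies on, that a full-text suffix index finds a Chinese query of any length with no word segmentation, is easy to demonstrate with a toy suffix array. The naive construction below is for clarity only; the project's improved algorithm approaches the linear lower bound.

    def build_suffix_array(text):
        # Naive construction, O(n^2 log n); illustration only.
        return sorted(range(len(text)), key=lambda i: text[i:])

    def occurrences(text, sa, q):
        # Two binary searches bound the block of suffixes starting with q.
        lo, hi = 0, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if text[sa[mid]:sa[mid] + len(q)] < q:
                lo = mid + 1
            else:
                hi = mid
        start, hi = lo, len(sa)
        while lo < hi:
            mid = (lo + hi) // 2
            if text[sa[mid]:sa[mid] + len(q)] <= q:
                lo = mid + 1
            else:
                hi = mid
        return sorted(sa[start:lo])

    text = "搜索引擎搜索"                   # no delimiters between words
    sa = build_suffix_array(text)
    print(occurrences(text, sa, "搜索"))    # -> [0, 4]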
2005 - CS025
A PRACTICAL IMPLEMENTATION OF GRADIENT-BASED CONVOLUTIONAL NEURAL NETWORKS IN HANDWRITING RECOGNITION
Yuetian Xu
Stuyvesant High School, New York, New York, USA
In this project, neural networks are applied to a practical handwriting recognition task. Building on previous work of Yann LeCun et al. with numerical digits, the
handwriting recognition range is extended to the entire printable ASCII set. This larger range presents some unique problems and challenges. The extension is
accomplished through an examination of the state-of-the-art recognizer LeNet 5. Improvements are made in the data preprocessing/input, neural network, and
classifier areas in order to make this system more efficient relative to the given task.<br><br> At each step, a controlled experiment is performed to determine
the accuracy vs. time performance for each choice. The results of the experiment determine whether the proposed change is included in the final system. The
final system attained 93% accuracy compared with the previous best result of 85% on the same Unipen dataset.<br><br> These aforementioned improvements
are to be integrated into a commercial product that is currently in development and slated for release in late 2005.<br><br>
Awards won at the 2005 ISEF
Award of $500 - American Association for Artificial Intelligence
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
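For readers unfamiliar with LeNet-style recognizers, each layer pairs a convolutional feature map with a subsampling step. The numpy sketch below shows one such pair in isolation; LeNet-5 itself stacks several trainable layers with biases and a classifier, which this fragment does not attempt to reproduce.

    import numpy as np

    def conv2d_valid(x, k):
        # One feature map: 'valid' 2-D convolution followed by tanh squashing.
        kh, kw = k.shape
        out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
        return np.tanh(out)

    def subsample(x, s=2):
        # s x s average pooling, as in LeNet-5 subsampling layers.
        h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
        return x[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))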
2005 - CS028
CONSTRUCT A KERNEL-SERVICE-ACCELERATOR WITH SW&HW MESHING AND REALITY MULTI-THREADING PARALLEL PROCESSING FOR
FPGA-BASED RECONFIGURABLE SYSTEM
ZHANG Yizhong
Kongjiang Senior High School, Shanghai, China
Operating Systems for Run-Time Reconfigurable Systems (OS4RS) have become a new field of research in computer science. However, current OS4RS are not perfect, and high performance cannot be achieved merely by optimizing software algorithms when the hardware structure changes dynamically on a SoC (System-on-a-Chip).<br><br> In view of this problem, a key concept of System-Patience-Inversion (S.P.I.) is introduced, and a novel architecture for the real-time kernel of an OS4RS is proposed.<br><br> In this paper, the author's purpose is to put part of the kernel services (task switching, etc.) at the bottom layer of the computer architecture to control the hardware directly. Thus, a Kernel-Service-Accelerator (KSA) is developed. The accelerator does not need a complex micro-processor (uP) to manage SW&HW (software-independent hardware), which makes it possible to obtain more available hardware resources. The accelerator combines with hardware logic that supports the system's Run-Time Partial Reconfiguration (RTR). It aims to realize system Self-Configuration (SC) independent of the uP, to speed up kernel services, and to improve the efficiency of SW&HW co-working inside one Field-Programmable Gate Array (FPGA).<br><br> The RTL code of the KSA is written to generate a reusable IP core. This KSA for OS4RS allows a task to switch between SW and HW under a new static priority-driven preemptive (SPDP) scheduling mode, and configures the corresponding Processing Unit, Memory Unit, or Logic Function Unit for various tasks. Multi-threaded speed-up with parallel processing in FPGA-based reconfigurable environments is achieved.<br><br>
Awards won at the 2005 ISEF
Second Award of $500 - Association for Computing Machinery
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Second Award of $150 - Patent and Trademark Office Society
2005 - CS018
HANDWRITTEN NUMBER RECOGNITION: ANTS VS. TEMPLATES
Holly Kristine Zelnio
Chaminade-Julienne Catholic High School, Dayton, Ohio, USA
This project developed a computer program that recognized handwritten numbers using template matching. The computer program’s recognition accuracy was
compared to an ant program that was developed by the author last year. The hypothesis of this project was that the ant program would have higher
performance than the template matching approach. It was hypothesized that the more flexible matching of the ant program would perform better against the
variability of handwritten numbers. Using the same testing approach as last year, the template matching approach was shown to have better performance than
the ant program. The results were statistically significant. Hence, the hypothesis was proven false. The second part of this project investigated ways to improve
the performance of the template matching program. Several approaches were developed and compared. Significant performance improvement over both the
ant program and the original template approach was demonstrated.
Awards won at the 2005 ISEF
Award of $500 - American Association for Artificial Intelligence
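Template matching of the kind compared here typically reduces to a normalized correlation score per digit template; the sketch below is a hypothetical baseline (it assumes the input image and every template share one size), not the author's program.

    import numpy as np

    def match_score(img, tmpl):
        # Normalized cross-correlation: 1.0 means a perfect match.
        a = img.astype(float) - img.mean()
        b = tmpl.astype(float) - tmpl.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def classify(img, templates):
        # Pick the digit whose template correlates best with the input.
        return max(templates, key=lambda d: match_score(img, templates[d]))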
2006 - CS009
MULTI-PURPOSE DESIGNING MACHINE
Amjad Fakhri Abu Salem
Terra Sancta College, Amman, Jordan
The main idea of this project is the integration of computer, electrical, and mechanical subsystems in order to construct an artificially intelligent device that can cut materials such as wood into a shape designed by the user, a multi-purpose designing machine that can be used to:<br><br>1. Design accurate, small objects (allowing it to be used as a lathe).<br><br>2. Hemstitch clothes (allowing it to be used in a weaving factory).<br><br>3. Drill objects (useful for a carpenter).<br><br>To achieve this, three main issues were addressed: the first was to program the computer software that gives the user the ability to draw; the second was to send the coordinates of the drawing through the parallel port to the hardware; and the third was to program a microcontroller (PIC 16F876) to receive those coordinates and drive the stepper motors as required.<br><br>The computer program was written in VB (Visual Basic), and the coordinates were passed to the hardware by calculating all of the coordinates in a drawn design and sending them to the parallel port using (io.dll). Finally, the PIC microcontroller was programmed in assembly language with a PIC programmer bought from AL – Baheth Center.<br><br>The whole operation of the project can be summarized as follows: the user draws a shape in the computer program, and the hardware cuts a piece of wood into that same shape, or performs one of the other purposes mentioned above.<br><br>It is important to mention that the prototype does not yet cut, drill, or hemstitch; at the moment, it draws the designed shape on paper.<br><br>
2006 - CS315
CAE: COMPUTER ANALYSIS OF EXERCISES PART 2
Alicia Renee Andrade, Jacqueline E. Mooney
Wachusett Regional High School, Holden, MA, USA
The main goal of this year's project was to create a series of computer programs to easily and affordably analyze an athlete’s movements without extensive
user input. Three new computer programs were written in conjunction with a modified device driver to capture images from footage of the movements. These
new programs broke down and analyzed the athlete's movements by independently completing the segments not requiring operator intelligence. The resulting
analysis could then be used to help the athlete perfect his/her performance.<br><br>An athlete was videotaped with a digital camcorder as he/she performed
certain moves. The athlete wore color-coded markers designating key joints of the body. Separate colors denoted specific areas of the body so that the regions
could be analyzed individually. These images were captured using a modified version of a device driver called VBStillCaptureAJ, which allowed the computer to
independently capture all still frames needed for the following programs. The images were synchronized using the Exercise Picture Program. The Exercise
Data Program allowed the computer to locate the points of movement by color sequencing. The Exercise Analysis Program drew the data for successive trials
of the same movement and analyzed the error between specific areas of the body or the entire area of movement. A stick figure was superimposed on top of
the footage to provide easy comparisons between trials. <br><br>
Awards won at the 2006 ISEF
Team Second Award of $400 for each team member - IEEE Computer Society
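The "locate the points of movement by color sequencing" step can be illustrated with a simple color-threshold centroid. The tolerance and RGB representation below are assumptions for the sketch, not details of the team's Exercise Data Program.

    import numpy as np

    def marker_centroid(frame, target_rgb, tol=40):
        # Mean position of pixels within `tol` of the marker color in
        # every channel; frame is an H x W x 3 uint8 array.
        mask = (np.abs(frame.astype(int) - np.array(target_rgb)) < tol).all(axis=-1)
        ys, xs = np.nonzero(mask)
        return (xs.mean(), ys.mean()) if xs.size else None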
2006 - CS304
WORK SCHEDULING: A CONFLICT IN AMERICA
David Omid Attarzadeh, Josh Whelchel
duPont Manual High School, Louisville, KY, USA
Across the nation, managers and shift leaders spend countless hours scheduling their employees into shifts. Unfortunately, schedules must constantly be
redrafted and manipulated as new information becomes available.<br><br> We hypothesized that by automating this process, we could save thousands of
hours once occupied by creating and modifying these schedules. By developing a website that would allow employers to enter their employees into a database
and manage their positions and priority ratings, and also allow these employees to log in, swap hours, choose preferred shifts, request days off, and perform other schedule maintenance tasks, we could spare the schedule maintainer hours of work and hassle.<br><br> To develop this software, we started by designing a Java sketch of the online system. However, due to limitations posed by Java regarding memory, database manipulation, and easy user front-end design, we decided to move to a more solid approach. Using the PHP language with a MySQL database back-end allowed us to develop the online version
of this software.<br><br> Through the online interface, we were able to manipulate a mock work schedule much more quickly than we might have by hand. The
algorithms designed allowed us to place mock employees in shifts that they preferred and were available to work, in addition to saving us the time required to
distribute the schedule to each employee. Managers not only have the ability to quickly oversee all requests and shift trades, but they can also quickly access
an employee's contact information should a problem arise.
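The authors implemented the scheduler in PHP with a MySQL back-end; as a language-neutral illustration of the placement logic (availability plus priority ratings), here is a hypothetical greedy sketch in Python.

    def assign_shifts(shifts, employees):
        # Greedy sketch: each shift goes to the highest-priority employee
        # who is available for it and still under their weekly hour cap.
        schedule = {}
        for shift, hours in shifts:                  # e.g. ("Mon 9-17", 8)
            pool = [e for e in employees
                    if shift in e["available"] and e["worked"] + hours <= e["cap"]]
            if pool:
                best = max(pool, key=lambda e: e["priority"])
                schedule[shift] = best["name"]
                best["worked"] += hours
        return schedule

    staff = [{"name": "Ana", "priority": 2, "cap": 40, "worked": 0,
              "available": {"Mon 9-17", "Tue 9-17"}},
             {"name": "Ben", "priority": 1, "cap": 40, "worked": 0,
              "available": {"Mon 9-17"}}]
    print(assign_shifts([("Mon 9-17", 8), ("Tue 9-17", 8)], staff))
    # -> {'Mon 9-17': 'Ana', 'Tue 9-17': 'Ana'}

A real system would also honor requested days off and shift swaps, as the abstract describes.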
2006 - CS301
AN IMAGE MATCHING PROGRAM TO AUTOMATE HOMELAND SECURITY
Christopher Robert Bethel, Elizabeth Alice Ennis
Lake Brantley High School, Altamonte Springs, Florida, USA
Terrorism has recently become a major topic in the United States. New measures have been taken in public places, especially airports. If you have been to an airport recently, you have surely heard intercom announcements asking you to report any unattended luggage. For our project we hoped to create an image matching program that would alert a security guard if there was a potential threat from unattended baggage.<br><br> To accomplish
this task we first learned how to use Matlab. Matlab, which stands for Matrix Laboratory, is a software package used for many functions, including image
analysis. In order to teach ourselves this interactive programming environment we studied it through books and used Matlab's online help section. <br><br> We
then applied our knowledge of Matlab in making a program for this image matching airport scenario. We used video from a digital camera and separated it into its individual frames. These frames were then read into the program and compared with one another to see if there was any change. We then
determined whether the amount of change was significant or not. If it was determined significant then the objects that were different were then compared to see
if they were the same object. If the different object remained for 60 seconds then we determined it to be a possible threat and an alert would appear. If the
objects were not the same then the whole process started over.
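The comparison the team describes is, in essence, frame differencing. A minimal grayscale sketch is shown below (the thresholds are illustrative assumptions; the original was written in Matlab); the 60-second persistence check then amounts to requiring the same changed region across consecutive frames.

    import numpy as np

    def significant_change(prev, curr, pixel_thresh=30, area_thresh=500):
        # Compare two grayscale frames; return the mask of changed pixels
        # if the changed area is large enough to matter, else None.
        diff = np.abs(curr.astype(int) - prev.astype(int))
        mask = diff > pixel_thresh
        return mask if mask.sum() >= area_thresh else None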
2006 - CS003
FROM SCRAP TO SUPERCOMPUTER
Terrence Dean Bonner
Fort White High School, Fort White, Florida, USA
This experiment aims to build an easily usable supercomputer that will speed up a task that would usually take a very long time to accomplish. I will do this by using older model computers that many people or businesses would normally throw away. I hope to find sponsors who will donate their used PCs or give me some new ones. The experiment compiles the source code for ParallelKnoppix Linux, which would normally take about 45 minutes on my fastest PC. I hope to cut the time down to about 10 minutes or less and to achieve one gigaflop of supercomputer calculation speed. I only reached a little more than 100,000 megaflops, but that was a very good accomplishment with the computers that I was using. There were some problems with the OpenMosix measurement system: it had trouble accurately measuring the speed of the nodes during my main experiment. My actual flop rating was just under 1 gigaflop, at 800 megaflops. This was caused by a script error which did not accurately measure the true speed of node 1. It will be corrected.
2006 - CS011
MODELING WAVE CHARACTERISTICS
James Daniel Brandenburg
Cocoa High School, Cocoa, Florida, USA
The purpose of this project was to develop a software model using trigonometry, vectors, matrices, and programming methods to display splashing,
constructive and destructive interference, attenuation, and reflection of waves in water. This project also provided an opportunity to research, learn and
understand new forms of mathematics and programming methods. <br><br> This project consisted of two distinct procedures; first, writing a software model of
the wave characteristics using the mathematics learned and second, building a wave chamber to collect data to compare to the software model. The first
process began by writing a program for each wave characteristic. This allowed the researcher to learn the mathematics and programming methods to simulate
each characteristic individually. Then, each program was combined into a top and side view software model that simulated all these wave characteristics. The
second process included using a top view wave pool and building a side view wave chamber for acquiring data to be compared to the software model. The
researcher videotaped the wave phenomena and then studied the results using video editing software. The data was also used to write functions that enhanced
the accuracy of the simulation. <br><br> The researcher concluded that it is possible to write a software model to simulate constructive and destructive
interference, attenuation, reflection and splashing in water using trigonometry, vectors and matrices. This software could be used in the classroom setting to
illustrate wave behaviors, as well as for developing software that simulates wave action in canals, lagoons, harbors and shipping channels. <br><br>
Awards won at the 2006 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
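The behaviors listed, splashing, interference, attenuation, and reflection, all emerge from a discretized wave equation. The numpy sketch below shows the core update step under stated assumptions (a square grid, fixed walls that reflect waves, a stable Courant factor); it does not reproduce the author's own implementation.

    import numpy as np

    def step(u, u_prev, c2=0.25):
        # Leapfrog update: u_next = 2u - u_prev + c^2 * laplacian(u).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u_next = 2.0 * u - u_prev + c2 * lap
        u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0  # fixed walls
        return u_next, u

    u = np.zeros((100, 100)); u[50, 50] = 1.0   # a "splash" in the middle
    u_prev = u.copy()
    for _ in range(200):
        u, u_prev = step(u, u_prev)             # interference and reflection emerge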
2006 - CS008
TESTING THE EFFICIENT MARKET HYPOTHESIS WITH ARTIFICIAL NEURAL NETWORKS AND GENETIC ALGORITHMS
Benjamin Louis Brinkopf
Canterbury School, Fort Myers, FL, USA
The semi-strong theory of the Efficient Market Hypothesis states that inefficiencies deriving from publicly exchanged information in the marketplace are
impossibilities. Furthermore, the indices automatically reflect changes within the system and extracting anomalies that benefit the investor over long intervals
will not return any appreciable gain.<br><br> This macro-economic theory has provided the cornerstone for explaining efficiencies and inefficiencies
manifesting within market trading. While several studies accept this theory, a growing number are reverting to Keynesian economic principles for market
theory.<br><br> Last year’s research studied the weak-form of the Efficient Market Hypothesis (EMH), and I found that, over time, it is possible to extract
certain anomalies from inefficient manifestations in the market using purely historical data as the predictor variable. This year I am studying the numerous
variables and their nonlinear effects on inefficiencies within markets. Implementing artificial neural networks and genetic algorithms ensures creative patterns
that, combined with the variables, produce their own inefficiencies over time that allow the investor to consistently ‘beat the market’ and extract gains. <br><br>
Numerous variables have been used, including time of year, holiday effects, weather, firm size, P/E ratio, beta, commodities and consumer sentiment to
uncover inefficiencies that directly contrast the semi-strong theory of the Efficient Market Hypothesis. Since these variables must be nontraditional (to avoid
market efficiencies), they have been employed with neural networks and genetic algorithms. <br><br> Further, besides extracting gains for the investor over
long intervals, these variables manifesting anomalies and inefficiencies within the market disprove the foundation of the Efficient Market Hypothesis. <br><br>
Awards won at the 2006 ISEF
Scholarship Award of $12,500 per year, renewable annually - Florida Institute of Technology
2006 - CS010
STEREOVISION CORRESPONDENCE USING WAVELET BASED DYNAMIC PROGRAMMING
Alex Conway Buchanan
Myers Park High School, Charlotte, North Carolina, US
This project presents a new stereo-correspondence algorithm using a discrete Haar wavelet decomposition in a dynamic programming approach. The stereo
image correspondence problem involves quickly and accurately determining the locations in two stereo camera images that are the projection of the same point
in physical space. This correspondence is necessary in determining the three dimensional structure imaged by the stereophotographs. This method
quantitatively detects occluded regions: regions that are imaged in one photograph but not the other, or cases of narrow occlusion where the stereo ordering constraint fails. The algorithm allows a progressive, efficient search along epipolar lines of the disparity space image. It was tested on the Tsukuba stereo images, provided by the Middlebury stereo-image website, and compared to the results of other algorithms listed on that website. Rapid, accurate determination of stereo image correspondence is important in a variety of fields, including robotics, biomedical engineering, mechanical engineering, architecture, and biometrics.
Awards won at the 2006 ISEF
Second Award of $500 - Association for Computing Machinery
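The discrete Haar decomposition at the heart of this method splits a signal into coarse averages and detail coefficients at each level; dynamic programming then matches those coefficients along epipolar lines. A minimal 1-D sketch, applied per scanline (the signal length must be divisible by 2**levels):

    import numpy as np

    def haar_step(row):
        # One level of the discrete Haar transform: averages carry the
        # coarse signal, differences carry the detail.
        evens, odds = row[0::2], row[1::2]
        return (evens + odds) / np.sqrt(2), (evens - odds) / np.sqrt(2)

    def haar_decompose(row, levels=3):
        details = []
        for _ in range(levels):
            row, d = haar_step(row)
            details.append(d)
        return row, details          # coarse approximation + per-level details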
2006 - CS305
RECONSTRUCTION OF THREE-DIMENSIONAL OBJECTS FROM VIDEO FILMS
Thomas Buenger, Tobias Grelle
Heinrich-Hertz-Oberschule, Berlin, Germany
Our project deals with the reconstruction of three-dimensional data from video images. The chief idea was to create a system that would, like the human eye,
be able to gather information from two-dimensional images or sequences of images. In order to achieve that goal, firstly, we developed a theoretical model,
which would describe mathematically the laws of photography and the connections between the images of a film. Secondly, we dealt with digital image
processing and, eventually, the overall translation into a practical application. So we developed a software application for the extraction and three-dimensional
modeling of visual objects from a given, digital and continuous sequence of images.<br><br> <br><br>One special aspect is the fact that the system is
expected to be able to function with only one camera and to process films, in which the position of the camera changes unpredictably and irregularly. In
contrast to other systems, a calibration is, thus, no longer necessary. Based on these different requirements, the system can be characterized as follows:<br>
<br>Firstly, the still images undergo several systems of filtering, which enhance the quality by noise reduction. Furthermore, we use several algorithms for
finding edges and corners. By analyzing the movements from frame to frame, we are able to calculate the position of the camera. Based upon these data,
spatial coordinates can be calculated and a three-dimensional model can be rendered. <br><br> <br><br>Our procedure is especially suitable for static
objects, so that possible applications might be found in the fields of architecture, landscaping and exploration.<br><br>
Awards won at the 2006 ISEF
Scholarship Award of $1,000 - National Collegiate Inventors and Innovators Alliance/The Lemelson Foundation
2006 - CS037
DEVELOPMENT OF PEDESTRIAN SIMULATION FRAMEWORK BASED ON BEHAVIOR INTERPRETER
Kyuhong Byun
Hansung Science High School, San 5, Hyeonjo-dong, Seodaemun-gu, Seoul, South Korea
The study of Pedestrian Dynamics usually consists of three steps: (1) building a new behavior model, (2) creating a simulator corresponding to the behavior model, and (3) observing phenomena occurring in the simulation. Whenever a new behavior model is developed, a new simulator needs to be developed in order to accommodate the new characteristics of the behavior model.<br><br>In this work, we propose a framework to simulate a variety of behavior models of Pedestrian Dynamics. The framework provides the following features: (1) a user interface to design a set of simulations, (2) common libraries to easily support the implementation of the behavior model, (3) statistics and analysis to comprehend phenomena occurring in the simulation, and (4) a behavior interpreter to implement any kind of pedestrian behavior model. This interpreter enables various behavior models to be simulated via the framework. By using the
framework, new behavior models can be easily adapted and simulated. <br><br>
2006 - CS316
PALADIN: A NEW FAST AND SECURE SYMMETRIC BLOCK CIPHER
George Chen, Frank Fu-Han Chuang, Victor Andrew Shia
Monta Vista High School, Cupertino, California, USA
The security of the Advanced Encryption Standard (AES) has been questioned numerous times since its adoption in November 2001 due to its elegant
mathematical structure which may introduce a new breed of attacks the cryptography community is less familiar with. We present a new symmetric 512-bit
block cipher Paladin that borrows from the top three AES finalists (Rijndael, Serpent, and Twofish) while addressing security concerns reported. Our design
goals are for Paladin to be semi-conservative, flexible, simple, and efficient while having a long shelf-life.<br><br> We achieve these goals by using known
approaches to cipher design, improving on an established AES S-box construction technique, utilizing functions that can easily be extended or replaced, and
minimizing the memory requirements of the cipher. Regarding security, we find that differential cryptanalysis and linear cryptanalysis both have complexities
that make them less feasible than an exhaustive key search. Despite our use of a larger block size than AES and our use of a 1024-bit key, Paladin is faster
than four optimized implementations of AES on the Pentium 4, Athlon, Sempron, and Athlon64 platforms.
Awards won at the 2006 ISEF
First Award of $3,000 - Team Projects - Presented by Science News
Mathematica software package for all Intel Grand award first place winners. - Wolfram Research, Inc.
Trip to European Youth Science Exhibition "ESE 2006" - MILSET
2006 - CS306
LEAF! WHAT ARE YOU?
Hung-Ju Chen, Min Ju Yang
National Hsinchu Girls' Senior High School, Hsinchu City, Taiwan (R.O.C)
Many interesting plants share our living habitat but are unknown to us. To identify them, we usually consult encyclopedias. However, it is impractical to carry heavy encyclopedias for on-the-spot identification of plants in the field, especially for small kids. To address this issue, we have developed a mobile system that can help people explore and recognize plants anytime, anywhere. <br><br> Our system works as follows. First, Jane takes a picture of an unknown leaf with
her camera phone. Second, the leaf image is wirelessly transmitted to a remote database server and a search engine. Third, this search engine analyzes the
leaf image, extracts certain geometric features, and finds a set of best matched leaves and their corresponding plants. Finally, the search results are returned
and rendered on Jane's camera phone. Our focus is to design a search engine that considers the tradeoff between two conflicting goals: accuracy and
response time. Since search accuracy depends on the amount of computation spent on analyzing, extracting, and matching features from images, more
computation leads to better accuracy; however, it also results in a longer search time. In order to find a good balance between accuracy and response time, our
search engine utilizes a two-stage approach that applies two sets of feature extractions and weight assignments to achieve both good accuracy and response
time.<br><br> The result of the experiments shows that our approach is workable. It can achieve on average 82% recognition accuracy and 17.22 seconds of
response time per leaf query.<br><br>
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
2006 - CS040
TOUGH OR DIABOLICAL? AN ANALYSIS OF SUDOKU DIFFICULTY LEVEL
Elsa Star Culler
Lincoln Park, Chicago, IL, USA
The purpose of this experiment was to determine whether there is a relationship between the difficulty rating of a Sudoku puzzle and the complexity of the
reasoning methods needed to solve the puzzle.<br><br>I wrote a computer program in the Scheme language which used two simple algorithms for filling in
the squares of a Sudoku puzzle. The number of squares not filled in at the end of the program was recorded for Sudoku puzzles of each of the four difficulty
levels. The different samples were then analyzed to determine if there was a statistical difference.<br><br>The higher the difficulty rating of a Sudoku, the
more squares remained unsolved by the computer program. All of the Gentle and Moderate puzzles tested were completely solved by the computer program.
In addition, there was a significant difference between the number of unsolved squares in the sample of Tough puzzles and in the sample of Diabolical puzzles.
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
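The abstract does not spell out its two algorithms, but a typical "simple" rule is the naked single: fill any cell with exactly one legal candidate and repeat. This hypothetical Python version (the author's program was written in Scheme) returns the count of cells the rule leaves unsolved, mirroring the experiment's difficulty metric.

    def unsolved_after_singles(grid):
        # grid: 9x9 list of lists, 0 for an empty cell.
        digits = set(range(1, 10))
        progress = True
        while progress:
            progress = False
            for r in range(9):
                for c in range(9):
                    if grid[r][c]:
                        continue
                    used = ({grid[r][i] for i in range(9)} |
                            {grid[i][c] for i in range(9)} |
                            {grid[r - r % 3 + i][c - c % 3 + j]
                             for i in range(3) for j in range(3)})
                    cand = digits - used
                    if len(cand) == 1:
                        grid[r][c] = cand.pop()  # a naked single
                        progress = True
        return sum(row.count(0) for row in grid)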
2006 - CS307
NEURAL NETWORK SECURITY SYSTEM
David Nicholaas de Klerk, Dirk Johannes Jacobus Oosthuizen
Afrikaanse Hoër Seunskool, Pretoria, South Africa
We discovered that there is a need for a better security system implemented on personal computers.<br><br>We designed a system that verifies a person’s identity in three stages: a password, face recognition, and voice recognition. We developed an encryption method to encrypt and decrypt the passwords, and we implemented two separate neural networks to recognize the person’s face and voice. We chose neural networks because they are excellent at pattern recognition.
<br><br>We experimented with various methods of preprocessing the data that we used as inputs for the neural networks. We discovered that the best way of
preprocessing the data for the voice recognition is to convert a sound wave to a function of amplitude over frequency by using a Fast Fourier Transform (FFT).
For the face recognition, we used the red, blue and green values of a 256x256-pixel bitmap image for inputs.<br><br>We were able to develop a security
system that is very effective at protecting sensitive data in a home or small office environment. The program is also very user friendly and easy to manage as it
does not require much technical knowledge to maintain and use. The program is not hardware intensive and can be run on a standard desktop computer or
notebook.<br><br>
Awards won at the 2006 ISEF
Third Award of $1,000 - Team Projects - Presented by Science News
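The preprocessing the team found best for voice input, amplitude as a function of frequency, can be sketched in a few lines. The band count, windowing, and normalization below are assumptions for illustration; the original network's exact input format is not given.

    import numpy as np

    def voice_features(samples, bands=64):
        # Magnitude spectrum via FFT, averaged into fixed frequency bands
        # (assumes len(samples) is much larger than `bands`).
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        edges = np.linspace(0, len(spectrum), bands + 1, dtype=int)
        feats = np.array([spectrum[a:b].mean() for a, b in zip(edges[:-1], edges[1:])])
        return feats / (feats.max() + 1e-12)    # scale to [0, 1] for the network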
2006 - CS002
AN INNOVATIVE WAY TO IMPROVE THE SPEECH SKILLS OF THE HEARING IMPAIRED USING FOURIER ANALYSIS (YEAR III).
Krystelle Andree Denis
Lake Brantley High School, Altamonte Springs, Florida, USA
The purpose of this project is to determine if a visual representation of spoken language, built using a Java based program and Fourier analysis of basic verbal
sounds, can help improve the speech skills of the hearing impaired. The experiment was performed with a selected group of hearing-impaired volunteers
divided into an experimental group and a control group, with similar hearing characteristics.<br><br>All subjects individually pronounced samples of the English
vowel sounds. The experimental group used a visual feedback interface created with a Java program that simplifies vowel sounds using Fast Fourier Transform
and produces a visual representation of the vowel sounds. Using the interface, the experimental group compared the visual results of their speech sounds with
a library of reference sounds. The control group pronounced vowel sounds without the use of the interface. The recorded samples were later analyzed to
establish whether improvement occurred with the use of the visual interface or through simple repetition. This potential improvement was tested statistically by
comparing the harmonics of the subject sounds with the harmonics of the reference sounds.<br><br>The graphed results show that there is improvement in the
speech skills of the experimental group and no apparent improvement in the speech skills of the control group. The statistical analysis of the sound samples
indicates that the experimental group’s speech skills improved to the 95% level of confidence. The results of this experiment support the alternate hypothesis,
showing that the visual feedback interface allows the hearing impaired to improve their speech skills.
2006 - CS007
A NEW APPROACH TO LOSSLESS COMPRESSION
Humberto Diaz Suarez
Southwestern Educational Society, Mayaguez, PR, USA
The intent of this project was to create a lossless compression algorithm capable of compressing files further than current algorithms. The algorithm employed
a three-step process for its work. The first stage searched for patterns in the data and compiled a dictionary of them. The second stage reduced data segments
to the fewest bits required to store them. The final stage used irrational cubic roots to represent potentially infinite sequences of bytes. The performance of the
experimental algorithm was compared against that of WinZip using PPMd and WinRAR on its strongest setting, based on the ratio between the compressed
output size and the original file size. Compression was performed on various file types to ensure a variety of test scenarios, employing a new compression
method. The results demonstrated that the experimental algorithm outperformed the other two in almost all cases, with six exceptions. Thus the project
accomplished its goal.<br><br> The algorithm is currently of limited use to applications requiring fast compression. It is slow and therefore inappropriate for
anything operating under time constraints. However, tasks such as compressing data before writing it to long-term storage (CDs, DVDs) are feasible. Its
decompression speeds are much higher. Possible improvements include optimizations on the algorithm’s speed. Software modifications could significantly
decrease compression time. Theoretical changes could also altogether eliminate some time-consuming tasks within the code. Detailed checks on the
performance of individual sub-algorithms within the experimental code could also root out ineffective parts.
2006 - CS006
BACK-IN-TIME DEBUGGER
Vasily Dyachenko
Centre of Mathematical Education, Saint-Petersburg, Russia
In the current software industry, the creation of new design and coding technologies (programming languages, platforms, etc.) is one of the major activities. However, design and coding are only half of the software development process; the other half is debugging, bug fixing, and testing. Unfortunately, the functionality and quality of existing debugging and testing instruments lag seriously behind the need - everybody who has dealt with computers has dealt with crashes, errors, and data loss. This work is devoted to adding new functionality to debuggers. When a developer already knows about an error but does not know why it happens, he often needs an instrument to trace the execution of the program. Traditional debuggers can only peek into the program's current state, and if the program is already working incorrectly, the developer often has no option but to restart the program from the beginning. In this work the author presents a debugger that allows executing the program backward. It can show the developer what happened in the program earlier, before everything went wrong. This debugger is based on GDB for Linux. In contrast with other attempts at developing back-in-time debuggers, this one allows back-in-time debugging of programs in any language that compiles to native code (including C and C++).<br><br>
Awards won at the 2006 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2006 - CS314
AI: NATURAL LANGUAGE PROCESSING
Xavier Jeremy Falco, Emmanouel Georgiou Liodakis
Hillcrest High School, Midvale, Utah, United States of America
The purpose of this experiment was to evaluate approaches to natural language processing using artificial intelligence. It was hypothesized that an artificial
intelligence program could be implemented with a capacity for natural language processing. To test this, we created a computer program capable of interacting
with human intelligence through written form, building a vocabulary as well as grammatical capabilities. As the user entered data the computer would store
these sentences in an inner dictionary, creating a virtual repertoire of sentences to use. Different analytical methods were utilized to determine the most
effective method for calculating the best reply. 1500 sentences were tested with this program, over a set of three trials, each involving several different
individuals to provide the computer with a varied vocabulary. Based on a set of specific criteria, each sentence was determined as logical or illogical in
reference to the conversation. These standards were decided based on the grammar and coherency of the response in relation to the dialogue between the user and the computer.<br><br>For each trial, the data showed evidence of a distinct increase in the rate at which the computer used logical replies. Analysis of this data revealed that the algorithm and the analytical methods used were able to function and grow across a wide variety of vocabulary and syntax. The
program that was implemented thus successfully modeled human behavior and proved to be an effective method in cognitive recognition. It set and
demonstrated a successful approach to natural language processing using artificial intelligence. <br><br>
Awards won at the 2006 ISEF
Third Award of $1,000 - Team Projects - Presented by Science News
2006 - CS016
AN EXCEL-LENT TEST ANALYZER
Kelley Marie Fleming
Dove Science Academy, Tulsa, Oklahoma, United States of America
The purpose of this project was to find a way to reduce the time teachers spend grading tests, and to eliminate or greatly decrease the amount of money
schools spend on testing software and equipment. I used Microsoft Excel and Microsoft Office Document Imaging 2003 (MODI) to do this. I used Excel
because it is popular around the world and easily available. I wrote Visual Basic code in the Excel file using the MODI library. The code “reads” multiple
choice answer sheets, using Optical Character Recognition (OCR), and then analyzes the scores. I created a special answer sheet designed to increase the
accuracy of the OCR. <br><br>I performed tests to determine what font and font size would cause the least amount of errors when OCR was performed. I
asked teachers to use the answer sheets when giving tests. The program analyzed the answer sheets, found the choices that were marked, and recorded the
data in another Excel sheet. Then I checked for errors caused by humans and by the computer. I found that the accuracy of the program is approximately 98%.
<br><br>This program can analyze tests that have been marked in a variety of different ways: fill-in, crosses, checks, and dashes. Also, the test taker isn’t
restricted to just using a #2 pencil: different types of pencils and different colors of pens are acceptable. This method of grading tests could help schools all over
the world. It reduces costs, saves time, and is very accurate.<br><br>
Awards won at the 2006 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2006 - CS027
QUANTITATIVE ANALYSIS OF MUSIC IN MP3
Will Rowan Fletcher
Hellgate High School, Missoula, MT, United States
The information content or amount of "surprise" in popular music was compared with that in classical music. Songs of both genres in MP3 were loaded into the
data manipulation program Matlab Student Version 7.1 and were broken up into their constituent frequencies by applying a discrete Fourier transform. A
method for quantifying the complexity in a musical piece was devised based upon information theory. The normalized sound spectrum was treated as a
probability distribution function for each frequency, and thus the entropy for the signal as a whole could be calculated. It was concluded that popular music
tends to have higher information entropy and is therefore more complex.<br><br> While evaluating the initial hypothesis, it was observed that, within classical
music in particular, the pieces with greater entropy values tended to have been composed in more recent years. The preliminary discovery of this pattern has
led to an investigation to determine its validity.
Awards won at the 2006 ISEF
Tuition Scholarship of $105,000 - Drexel University
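The complexity measure described, treating the normalized spectrum as a probability distribution and computing its entropy, reduces to a few lines once a frame of samples has been decoded (MP3 decoding itself is outside this sketch):

    import numpy as np

    def spectral_entropy(samples):
        # Shannon entropy (bits) of the normalized magnitude spectrum:
        # higher entropy = more spectral "surprise".
        p = np.abs(np.fft.rfft(samples))
        p = p / p.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())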
2006 - CS313
HUMANSIS
Oscar Federico Flores Galvan, Jose de Jesus Briones Serna
CBTis 168, Aguascalientes, Aguascalientes, MEXICO
<br><br>“HumanSIS” is a computer package composed of software and hardware. Its purpose is to ACCELERATE the teaching-learning process of the human skeletal system. It includes a chronological monitor to measure the time spent navigating its five main modules:<br><br>1: The Teacher’s Room. It has two applications: a) user registry and b) updating the system’s subjects, including all their multimedia elements.<br><br>2: Virtual X-Ray Laboratory. The place where a virtual projection can be made by scanning the human skeleton.<br><br>3: Practice Workshop. The area where the knowledge acquired in the X-Ray Laboratory is verified, confirmed, and ratified.<br><br>4: Bones and Joints Room. The classroom where the topics related to the human body’s joints are studied.<br><br>5: Evaluation Center. The space where the student can be virtually certified in identifying and locating the bones and joints of the human skeletal system.<br><br>In addition, the system includes an electronic peripheral device that lets the student interact with some of the system’s didactic applications. Its application area is educational institutions offering study programs such as medicine, nursing, anatomy, first aid, and so on.<br><br>
Awards won at the 2006 ISEF
Fourth Award of $500 - Team Projects - Presented by Science News
2006 - CS318
A NEW ALGORITHM TO MINIMIZE FACTORY INEFFICIENCY THROUGH PENALTY REDUCTION
Andrew David Gamalski, Vinayak Muralidhar
Hamilton High School, Chandler AZ, USA
The objective of this research is to create a more efficient linear programming heuristic (LPH) which will maximize gross profit in a factory that employs multiple
machines with jobs that have equal processing times and release dates and to discover the structure of the problem and runtimes of the integer and linear
programs.<br><br> An integer program (IP) was formulated to mathematically describe the problem and provide a basis for comparison for the other heuristics.
C++ was used to generate the data and run the heuristics. The integer and linear programs were run in AMPL (a mathematical modeling language).<br><br>
Well known heuristics were programmed and used for comparison. These heuristics were priority first (PF), apparent tardiness cost (ATC), earliest due date
(EDD), first in first out (FIFO), and weighted due date (WDD). <br><br> It was found that the runtime of the LPH and the runtime of the IP are roughly the same.
However, the experimentation has shown that the IP runs quickly, meaning that it can be practically applied in an industrial setting. Furthermore, the LPH has
revealed the structure of the problem, which indicates that Pm|pj=p, rj| sum(wjTj) could be NP-hard. The IP improved overall gross profit by 27.31% on average.
<br><br>
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
Second Award of $1,500 - Team Projects - Presented by Science News
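Of the comparison heuristics, the apparent tardiness cost (ATC) rule is the least self-explanatory. At each decision point it schedules the job with the highest index below (w = weight, p = processing time, d = due date, p̄ = average processing time, K a look-ahead parameter); this standard formulation is a sketch, not the authors' exact implementation.

    import math

    def atc_index(w, p, d, t, k, p_bar):
        # Weighted-shortest-processing-time factor, discounted
        # exponentially by the job's slack at time t.
        slack = max(d - p - t, 0.0)
        return (w / p) * math.exp(-slack / (k * p_bar))

    def next_job(jobs, t, k=2.0):
        # jobs: (w, p, d) tuples for unscheduled jobs released by time t.
        p_bar = sum(p for _, p, _ in jobs) / len(jobs)
        return max(jobs, key=lambda j: atc_index(*j, t, k, p_bar))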
2006 - CS017
BUILDING A POWER-OPTIMIZED MIPS PIPELINE
Vidya Ganapati
Sunset High School, Portland, OR, USA
In an era of embedded systems, laptop computers, and growing energy consumption, power-aware computing is becoming increasingly important. However,
while highly specialized, energy-optimized chips may be designed for embedded systems, designing entirely new instruction set architectures for all-purpose
computers is not immediately practical because the compilers and software for the computer would have to be completely re-designed. The goal of this study is
to re-implement the processor pipeline without changing the instruction set architecture to gain power consumption savings that exceed performance cuts. In
this study, an instruction-dependent power and performance calculator for an open source, cycle-accurate MIPS R2000 processor core simulator was designed
using SystemC. The processor and calculator were tested with program traces and verified with waveforms. The division of this processor core into pipe stages
was then theoretically modified to test the effects of pipeline implementation on performance and power consumption. A theoretical framework for a novel,
dynamic power-optimized MIPS pipeline was established as a result of this work. Later work will focus on designing a cycle-accurate simulator and
power/performance calculator for the dynamic MIPS pipeline.
Awards won at the 2006 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
Second Award of $500 U.S. Savings Bond - Ashtavadhani Vidwan Ambati Subbaraya Chetty (AVASC) Foundation
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2006 - CS309
USING EULERIAN MODELS TO DETERMINE INITIAL CONDITIONS OF SEVERE WEATHER
Gwyneth Rhiannon Glissmann, Robert Vaughan Glissmann
Peak to Peak Charter School, Boulder, CO, USA
This project was conceived from a Scientific American article which proposed that severe weather could be diverted or diffused by changing its initial
conditions. Using this premise, programs were developed with C++ that allowed different weather components to be calculated over time from initial conditions
provided by the user. Models for pressure gradients, convection, and conduction were initially created in two dimensions and then evolved into three
dimensions.<br><br> The programs employ Eulerian grids composed of “cell” objects. Each cell retains its own values of temperature and pressure, and on set
timer intervals interacts with neighboring cells to determine subsequent values. The use of objects in the modeling programs provides several advantages
including the capability to add functionality without significantly modifying program architecture. The models were first implemented in two dimensions in order
to develop and refine algorithms that simulated weather mechanics.<br><br> The processing power required to model a three-dimensional simulation is usually
performed on enterprise-class computers, and easily surpasses the capability of laptop computers. To address this issue, Microsoft’s DirectX was used for the
graphics portion of the model because it takes advantage of the PC's Graphics Processing Unit, or GPU.<br><br> Finally, using the GPU for actually
performing weather model calculations became interesting after observing its powerful capabilities. A novel method was developed to use the GPU to multiply
large matrices, a frequent calculation required in models. Using this algorithm, the number of matrix calculations achieved per second was significantly higher
than the number achieved by the computer's CPU.
Awards won at the 2006 ISEF
First Award of $1,000 - American Meteorological Society
Scholarship Award of $20,000 - Department of Homeland Security, University Programs Office
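The cell interaction described, each cell updating from its neighbors on a timer tick, is an explicit finite-difference step. A minimal numpy sketch of the conduction component (the coefficient and the wrap-around boundary via np.roll are assumptions, not the authors' cell-object code):

    import numpy as np

    def conduct(T, alpha=0.1):
        # Each cell relaxes toward the average of its four neighbors;
        # 0 < alpha <= 1 keeps the update a stable blend. np.roll gives
        # periodic boundaries, standing in for the grid's edge handling.
        nbr_avg = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1)) / 4.0
        return T + alpha * (nbr_avg - T)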
2006 - CS048
MUSICAL INSTRUMENT RECOGNITION ON MONOPHONIC MUSICAL PHRASES
Huseyin Alp Gurkan
Galatasaray High School, Istanbul, Turkey
Musical instrument recognition is the study of building a computer system capable of “listening” to a piece of music and recognizing which instrument is being played. It is also closely related to other sound-source recognition applications, such as speaker (human voice) recognition. Our instrument recognition algorithm builds a single model for each instrument from various recordings of monophonic musical phrases. These models, consisting of the most general spectral features of the sounds generated by the instruments, allow instrument identification of new data from a given short sample. A new fundamental frequency estimation algorithm working in the frequency domain was also developed as a prerequisite of the instrument recognition system. The sound material that we used as training and test data consisted of unaccompanied instrument music recordings for 8 instruments, collected from diverse commercial CDs, so this quite large variation represented the conditions of potential practical applications. In our performance tests, the overall recognition accuracy increased significantly with longer test durations, and we obtained acceptable performance (70% or more) with test durations of 4 seconds or longer.
Awards won at the 2006 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
2006 - CS021
LINUX CLUSTERS AND PARALLEL PROCESSING: THE POWER OF MANY
Taylor Edward Helsper
Catholic High School, Huntsville, AL, United States
Computing clusters and parallel processing make it possible to divide a problem and send sections of it to other processors in an effort to complete it faster. My goal this year was to create a working Linux cluster and benchmark its efficiency, as well as measure any overhead and limitations.<br><br>I predicted that the increase in speed would be comparable to the total number of nodes (computers) in the cluster.<br><br>This project uses a version of Linux called ParallelKnoppix. Benchmarks were run to test how parallel processing works and how efficient it is. The computers were clustered together and set up using a DHCP server that was run on the host node booted from a Live CD. The server was started, and the slave nodes booted using PXE, or Pre-boot Execution Environment. This allowed the server to send a boot file to the slaves and assign them IP addresses. Once the server was set up, MPI, or Message Passing Interface, was executed so the master could communicate with the cluster. Two benchmarks were run: Bladeenc, which converted audio files, and the High Performance Linpack (HPL) benchmark, which solved matrices in parallel.<br><br>The Bladeenc encoding tests showed a significant increase in speed as the number of nodes increased. The HPL benchmark also showed a significant increase in gigaflops as nodes increased. These results support my hypothesis that clustering can increase performance without an increase in cost.<br><br>
2006 - CS013
A LOOK INTO COMPUTER SYSTEMS: DETERMINING WHAT FACTOR HAS THE MOST INFLUENCE ON TOTAL COMPUTER PERFORMANCE
Bakari Benjamin Hill
Frederick Douglass High School, Atlanta, GA, United States of America
With this experiment I plan to look further into computer systems, revisiting the original study areas but also adding the Central Processing Unit as a testing area. In motherboard testing, I study what the Northbridge and Southbridge control and affect, along with PC attachment connections and the performance of different interface cards. A new area is video processing: video gaming and graphics rendering have become an important part of working computers. For memory, I compare dual-channel versus single-channel configurations for better data transfer, as well as higher-speed memory and data latency, to determine which parts in which setup yield the greatest results.<br><br>To construct a computer system (beware of static electricity): mount the power supplies, motherboards, hard drives, CD drives, and floppy drives in the cases; mount the processors to each motherboard; mount the fans to each processor; mount each individual stick of memory to a different motherboard; connect IDE cables to the components on the motherboards (hard drives, floppy drives, CD-ROM drives); connect power cords to the components and power the system up; and install the Windows operating system. To install the benchmarking software, run the install wizard of each benchmarking application (SI SANDRA, 3D Mark, Performance Test) and follow the steps. To run the benchmarking software, double-click the links for the programs, either on the desktop or in the Start menu, and follow any instructions.<br><br>
2006 - CS039
THE EFFECTS OF REFRIGERANT-COOLED OIL SUBMERSION ON THE PERFORMANCE OF A CENTRAL PROCESSING UNIT IN A COMPUTER
Brock Anderson Hinig
Tuscarawas Valley High School, Zoarville, Ohio, USA
The cooling effectiveness of oil submersion on the performance of a computer was studied. For the test, an entire computer system was built and submerged in
oil. From research, it appears as though this is a unique approach to cooling the complete computer. <br><br> By using a refrigerant-based pump and a freezer
coil, extremely low oil temperatures could be maintained even while the PC was running at overclocked speeds. This innovation made it possible to obtain
substantial performance gains, without actually having to overclock the chip. This is a new discovery; the colder the chip, the faster it runs. The opposite is also
true; the hotter the chip, the slower it runs. In normal air, performance is dismal compared to the almost freezing temperatures obtainable in the oil.<br><br> By
testing with popular benchmarking applications, such as PC Mark ’04, it was proven that the colder the chip, the faster it runs and that oil submersion of the
complete system offers many benefits.<br><br>
Awards won at the 2006 ISEF
$5000 per year for four year scholarships. - Indiana University-Purdue University Indianapolis
Award of three $1,000 U.S. Savings Bonds, a certificate of achievement and a gold medallion. - United States Army
2006 - CS032
A STUDY ON THE DESIGN AND IMPLEMENTATION OF AN ARTIFICIALLY INTELLIGENT CONTROL SYSTEM
Sagar Indurkhya
The North Carolina School of Science and Mathematics, Durham, NC, USA
This project deals with the design and implementation of control systems for bots in a virtual simulation program, consisting of a virtual species that learns to
adapt to a hostile, dynamic and evolving environment, and whose members are differentiated only by their control system. The project tests whether a genetic
algorithm can successfully evolve the weights of two neural networks which evaluate a tree path for a domain for which an optimal solution cannot be
determined. Thus training is indirectly a result of long term reinforcement learning. If the model proposed is successful, it will help researchers in constructing
larger, generic models that may eventually lead to new technologies with applications in nanotechnology, autonomous military vehicles, computer networking,
and autonomous automobiles and transit systems.<br><br> Control systems are composed of two neural networks: the prediction network, a recurrent neural
network that employs a derivative of the standard back propagation algorithm, and a situation analysis network, a feed-forward neural network trained by a
genetic algorithm, enabling the evolution of a species of bots. Together, the two networks generate a large look-ahead tree, and evaluate it. The control system
is a means to employ time series analysis.<br><br> Four different architectures for the prediction network were tested and the bots were found to behave
initially as predicted, developing elementary adaptations in response to low level stimulation and displaying motor skills development.<br><br> Current
research is directed towards developing customized simulations to further develop applications and studying biological network architectures.<br><br>
Awards won at the 2006 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
First Award of $3,000 - United States Air Force
2006 - CS045
IMPACT STUDY OF DIFFERING TRAFFIC CONTROL SYSTEMS
Caine Landreth Jette
Maui High School, Kahului, HI, USA
We deal with traffic every day of our lives, whether we're going to school, going to work, or going to church. Traffic is an intrinsic property of industrialization and
urbanization, and thus our only options are to either accept it for what it is, or learn to minimize its effects and still lead healthy and productive lives. The stress
that accompanies traffic has influenced people enough to change their lives, and these changes are completely unnecessary, self-imposed modifications of
behavior.<br><br>This project was designed and implemented to ultimately prove the above statement. Our lives should not be subject to change simply
because of traffic. Through the use of computer simulations, the efficiency of traffic control systems can be checked and modified; real-time recoding of the
phase timing parameters of a simulated traffic control system is now possible with the development of technologies geared for such purposes.<br><br>The
Hina Avenue and Lono Avenue intersection was calibrated and left untouched for nearly eight years. Using miniTraff, an open-source traffic simulator, I was
able to come up with theoretically more efficient phase timing parameters for the lights governing this intersection. The next step is to actually implement these
new parameters and see how well they work in reality.
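An illustrative sketch of the kind of experiment described above, under assumed parameters: a two-phase fixed-time signal is simulated with random arrivals, and candidate green splits are compared by mean queue length. The arrival rates and phase lengths are invented, and miniTraff's actual model is more detailed.

    import random

    def simulate(green_ns, green_ew, arrival_ns=0.25, arrival_ew=0.15, cycles=500):
        # Two-phase fixed-time signal: each second a car arrives on an
        # approach with the given probability, and the approach whose
        # phase is green discharges at most one car per second.
        q_ns = q_ew = 0
        total_queue = steps = 0
        for _ in range(cycles):
            for phase, length in (("NS", green_ns), ("EW", green_ew)):
                for _ in range(length):
                    q_ns += random.random() < arrival_ns
                    q_ew += random.random() < arrival_ew
                    if phase == "NS":
                        q_ns = max(0, q_ns - 1)
                    else:
                        q_ew = max(0, q_ew - 1)
                    total_queue += q_ns + q_ew
                    steps += 1
        return total_queue / steps   # mean number of waiting vehicles

    # Compare candidate phase-timing splits for one intersection.
    for split in ((30, 30), (40, 20), (45, 15)):
        print(split, round(simulate(*split), 2))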
2006 - CS047
EFFICIENCY OF PROGRAMMING LANGUAGES: A GENERATOR/SOLVER APPROACH
Jorge Alberto Jimenez
Franklin High School, El Paso, Texas, United States
This project serves the purpose of introducing in greater detail the specifics of the Java and C++ architectures, compilation processes, and their effect on
performance.<br><br> The project consisted of developing and implementing a maze generator and solver in order to determine how different implementations
of Java compared to C++ in processing speed. It was hypothesized that due to the nature of C++'s binary implementation, it would be more efficient than both
Java's implementations.<br><br> Results showed a significant advantage of C++ over the traditional implementation of Java. However, only a very small gap existed between C++ and a non-interpreted implementation of Java known as “Just-In-Time” compilation. Results over various trials consistently showed almost equal performance on all but one of the criteria. This last criterion attempted, within its scope, to disprove the notion that faster operations equal more efficient problem solving.<br><br> It can now be concluded that the efficiency of a programming language derives from the context in which it is used. Furthermore, it was concluded that though the performance of Java JIT is comparable to C++, the two are very different languages, and issues such as portability and hardware control become the deciding factors in determining use. Moreover, the artificial intelligence component raised the question of what factors make problem solving more efficient.<br><br>
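A rough sketch of the generator/solver benchmark idea, shown in Python for brevity; the project's actual maze construction may differ (a random-grid maze and breadth-first solver are assumed here). Ported to each language under test, a harness of this shape is what would be timed for Java versus C++.

    import random
    import time
    from collections import deque

    def generate(n, wall_prob=0.3, seed=0):
        # Random n x n grid maze: True = open cell; corners forced open.
        rng = random.Random(seed)
        grid = [[rng.random() > wall_prob for _ in range(n)] for _ in range(n)]
        grid[0][0] = grid[n - 1][n - 1] = True
        return grid

    def solve(grid):
        # Breadth-first search from the top-left to the bottom-right corner.
        n = len(grid)
        seen, queue = {(0, 0)}, deque([(0, 0)])
        while queue:
            x, y = queue.popleft()
            if (x, y) == (n - 1, n - 1):
                return True
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < n and 0 <= ny < n and grid[nx][ny] and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    queue.append((nx, ny))
        return False

    start = time.perf_counter()
    for seed in range(100):
        solve(generate(200, seed=seed))
    print("elapsed:", time.perf_counter() - start, "s")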
2006 - CS031
POWER DEMON: IMPROVED THROUGH SOFTWARE
Sierra Marie Kahn
Park Vista High School, Boynton Beach, Florida, United States
The goal was to improve the Power Demon through better load-sharing algorithms, designed to be more practical for the appliances used in a home during a power outage. The hypothesis is that if an embedded program is developed to switch periodic loads on a generator, then the power output can be used more effectively, permitting more items to be powered.<br><br> The materials needed were the Power Demon, Microsoft Visio 2000, Microsoft Visual C++ 6.0, a computer, miscellaneous household appliances, and extension cords. The program was tested by downloading it into the Power Demon. Test 1 began by plugging an electric can opener, a microwave, and a toaster into the Power Demon, making sure they were in the on position and attempting to draw current. The Power Demon was then plugged into a wall outlet, and the order in which the appliances turned on was monitored and recorded. Test 2 repeated Test 1, except that while appliance 2 was on, appliance 1 was put back in the on position; the order in which the appliances turned on was again monitored and recorded.<br><br> When the program was run, the Power Demon was able to support more electrical appliances.<br><br> The hypothesis was supported: the Power Demon allows a generator to work more effectively.
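A minimal sketch of the greedy load-sharing idea, in Python rather than the embedded C++ of the real device; the wattages, priorities, and generator capacity are invented for illustration, and the actual Power Demon presumably also rotates deferred loads over time.

    GENERATOR_CAPACITY_W = 1800  # hypothetical generator limit

    # (name, watts, priority); a lower priority number = more important.
    APPLIANCES = [("microwave", 1100, 1), ("toaster", 900, 2),
                  ("can opener", 150, 3)]

    def schedule(appliances, capacity):
        # Greedy load sharing: switch loads on in priority order while the
        # running total stays under the generator's capacity; anything
        # that does not fit waits for a later switching slot.
        on, load = [], 0
        for name, watts, _ in sorted(appliances, key=lambda a: a[2]):
            if load + watts <= capacity:
                on.append(name)
                load += watts
        return on, load

    print(schedule(APPLIANCES, GENERATOR_CAPACITY_W))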
2006 - CS317
INTELLECTUAL DATA-ANALYTICAL FORECASTING SYSTEM "UNICUM"
Artyom Karelskiy, Purgin Dmitriy
Almaty Technical lyceum #165, Almaty, Kazakhstan
Nowadays, analytical technologies are used to solve poorly algorithmizable problems, problems that can hardly (or not at all) be solved using classical mathematics. Analytical technologies are principles based on models, algorithms, and mathematical theorems that allow the values of unknown characteristics and parameters to be estimated from known data. Their basis is artificial intelligence technology imitating the neuron activity of the human brain. The first commercial releases based on it appeared in the 1980s and became widespread in developed countries. Various “fuzzy” problems, such as pattern and speech recognition, discovering regularities, classification, and forecasting, are successfully solved by means of neural networks. In such problems, where traditional techniques are powerless, neural networks are often the only effective method of solution. There are many algorithms for training neural networks, but there are no sharply defined rules for forming their structure; that is why it is impossible to solve different problems with the same network configuration, even if those problems differ only minimally. The purpose of the proposed project is to develop a mechanism for auto-configuring the neural network structure and to create a program that automatically chooses the optimal network structure for the problem at hand. In the course of the research, much theoretical material was used and many practical experiments were conducted. The results provided the basis for an original algorithm that forms the structure of a neural network from the known facts of its training samples alone. The system was tested on problems from different areas of human activity: economics, medicine, heat-and-power engineering, and marketing research. All tests gave satisfactory outcomes (maximum forecast error of 6%).
Awards won at the 2006 ISEF
Fourth Award of $500 - Team Projects - Presented by Science News
2006 - CS043
NETWORK MASTER 2.0 PROFESSIONAL
Miroslav Kosik
Private Secondary Grammar School specializing in programming, Kosice, Slovakia
I am a 15-year-old student in the 5th grade at the Private Secondary Grammar School specializing in programming in Kosice, Slovakia. Programming is my big hobby, so I decided to make a project on networking. My project is called Network Master 2.0 Professional. It is a program suite for administration, communication, remote control, and data transfer among computers in a LAN. The administrator can perform service controls, installations, and computer setup and control. It is a client-server program. It provides the administrator with many controls: power management, information about the client and the computer, network printing, data transfer, remote control, remote computer setup, blocking of web pages and applications, a remote task manager, and many more. The program saves data about user logins in a database available to the administrators. Communication among clients is provided by network chat and data transfer. The program automatically searches for remote PCs. All data are protected with the MD5 algorithm and limited by a validity period until they are sent and saved in the
database. Network Master is intended for use on Microsoft Windows because it uses Windows libraries. The programming language used is Visual Basic 6. The program can be updated automatically, and all new versions are accessible on the Internet. It provides many new functions to make remote PC control easier for administrators. In the future the program could also be used for service controls and repairs across a WAN and through the Internet.<br><br>
2006 - CS023
COMPUTER ANALYSIS OF PARLIAMENT STRUCTURES
Andrey S. Kravchenko
Lyceum of information technologies #1533, Moscow, Russia
Political scientists, historians and economists are interested in analyzing different parliamentary structures, and in explaining and predicting parliamentary
members’ behavior. The objective of this project was to implement the software toolkit assisting these experts in the analysis process. The analysis is
performed using the interactive visualization module. <br><br>The analysis module makes it possible to evaluate the relative power of different parliament
factions based on the number of coalitions in which those factions play a key role. Various mathematical indexes are used to analyze coalitions. The political
position of each deputy is characterized on a plane where the axes can be easily interpreted as various political dissimilarity factors (for example, “liberal-conservative” or “reformist-antireformist”). Thus we can build “political preference maps”, where every member of the parliament is represented by a separate point. From the political views of the deputies in a faction, the position of the whole faction can be evaluated. The change in a faction's position can be visualized as a time-dependent process; thus the expert can evaluate that faction's political stability. Cluster analysis can be performed to view groups of deputies with similar political positions. This approach helps to explain historical results of the past and to predict the results of parliamentary votes in the future. <br><br>The toolkit can employ various parliamentary databases. It was used to study the first Russian parliament (the Duma, 1907-1912) and the modern Ukrainian parliament (the Rada). A direction of future research is the analysis of processes in other countries' parliaments as well.<br><br>
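The relative power "based on the number of coalitions in which a faction plays a key role" is, in spirit, a voting-power index. A toy version of one standard choice, the normalized Banzhaf index, follows; the abstract does not say which indexes the toolkit actually uses, so this is an assumed stand-in, and the seat counts are invented.

    from itertools import combinations

    def banzhaf(seats, quota):
        # seats: {faction: seat count}. A faction is critical ("plays a
        # key role") in a winning coalition if removing it drops the
        # coalition's total below the quota.
        factions = list(seats)
        swings = {f: 0 for f in factions}
        for r in range(1, len(factions) + 1):
            for coalition in combinations(factions, r):
                total = sum(seats[f] for f in coalition)
                if total >= quota:
                    for f in coalition:
                        if total - seats[f] < quota:
                            swings[f] += 1
        total_swings = sum(swings.values())
        return {f: swings[f] / total_swings for f in factions}

    # Toy chamber: 100 seats, simple majority.
    print(banzhaf({"A": 45, "B": 35, "C": 15, "D": 5}, quota=51))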
2006 - CS035
FUZZY LOGIC BASED MUSIC COMPOSITION TOOL
Hyeonho Lee
Gongju High School, Gongju-si Chungcheongnam-do, South Korea
Artificial Intelligence (AI) is the field of imitating the human thinking process and applying it for various purposes. There has been much research in AI aimed at understanding and utilizing human thinking processes. Among these, art such as music is one of the high-level thinking processes that is hard to quantify or formalize. In this work, we address music composition with AI methods in order to understand music composition by humans. Based on J. S. Bach's Musical Offering, which builds music from a base melody, we propose an approach to imitating the music composition process. We apply fuzzy logic to reflect the fuzziness of human thought, and an expert system with min-max composition to imitate the selection process of human thinking as well as the rules of music composition. We have developed a prototype tool, which we expect to serve as a starting point for understanding high-level human thinking processes.
2006 - CS015
FACOOL: CONVENIENT INTERNET FACE RETRIEVAL SYSTEM
Liu Liu
Shanghai Datong High School, Shanghai, P.R.China
Nowadays Google, as a text search engine, is everywhere. However, when we talk about tomorrow's search engine, more and more people think that multimedia search technology is essential. Facool (www.facool.net), a search service on the Internet, may be thought of as a draft of the next generation of search technology.<br><br> Facool's goal is to retrieve a person's identity via the Internet from only one face image. Distinguished from other systems that only retrieve images, Facool returns not only the correct images but also the corresponding name. Facool mines the relationship between faces and names with the NameRank algorithm, which compares the contents of webpages related to the face and then assigns a rank denoting the name's grade-of-membership to the face.<br><br> For a search engine, the most important problem is retrieving the huge amount of information on the Internet rapidly. To solve this problem, a novel face index method based on gray-level referring was developed. In experiments, retrieval with the new method was up to three times as fast as the normal approach.<br><br> As Facool is an Internet service, a crawler program is launched to capture pictures and webpages over the Internet. An HTTP server is also provided, so that users can simply point and click to find what they want.<br><br> At the end of this project, Facool was evaluated on the CBSR face database. As the results show, it consistently returned correct matches among the top 5 items, from more than 10 thousand images, in less than 50 milliseconds.
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
Third Award of $300 - Association for Computing Machinery
2006 - CS310
PDA BASED GUIDE
Ilya D. Lopatin, Yury V. Kuzin, Alexander M. Nazarov
Centre of education Tsaritsino #548, Moscow, Russia
The goal of our project is to design a new type of portable educational interactive multimedia system and a new method of interacting with an audience; in our case, a museum guide.<br><br>We created a prototype, games serving as a guide for the Polytechnical Museum in team and single-player modes, and tested it with actual visitors of the museum. After analyzing the prototype we decided to enhance it with a local navigation system. GPS, Wi-Fi,
Bluetooth, RFID, IR, and OCR code reader technologies were then researched and analyzed as candidates for the navigation system. The next steps were the creation of software for the PDA to control and interact with the navigation marks, and a final test run of the complete system with the PDA, IR labels, and software.<br><br>After analyzing the navigation options, IR technology was chosen for museum navigation, and a PIC12F629 microprocessor programmed in assembler was used as the controller for the navigation marks.<br><br>The control software was built around a program based on Dijkstra's algorithm that creates a database of the shortest navigation routes, and a Flash ActiveX component serving as a shell for playing the data files.<br><br>In summary, our project offers an alternative to traditional human-guided group excursions and to existing audio guides: a portable navigation system (for museums or national parks, for example) that gives extended, structured information about expositions and related resources. Our guide is also friendly to visitors with limited abilities and can even support visitors who are not familiar with the local language.<br><br>
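A minimal sketch of the shortest-route step, using Dijkstra's algorithm as the abstract names it; the museum graph below is invented, and the real system stores routes between IR navigation marks rather than named rooms.

    import heapq

    def shortest_paths(graph, source):
        # graph: {node: [(neighbor, distance), ...]}, e.g. rooms linked by
        # corridors, with navigation marks at the nodes.
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    museum = {"entrance": [("hall", 10)],
              "hall": [("entrance", 10), ("physics", 5), ("space", 8)],
              "physics": [("hall", 5), ("space", 4)],
              "space": [("hall", 8), ("physics", 4)]}
    print(shortest_paths(museum, "entrance"))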
2006 - CS026
MD-OS OPERATING SYSTEM
Denis V. Mandrov
School #117, Omsk, RUSSIA
Hundreds of various operating systems for computers have been designed and distributed throughout the world during the past decades. Some of them are
commercially successful and cumbersome; some are elegant and special-purpose only.<br><br>The key challenge in this project was to write OS from scratch
in X86 assembly language (NetWide Assembler, NASM, was used). The key objective was to elucidate the inner structure of modern operating systems
implementing the mainstream functions and keeping LOC count as small as possible. X86 instruction set ‘core’ is here to stay, and one can suppose that using
MD-OS as a sandbox-style OS can help students and professional engineers.<br><br>MD-OS is a 32-bit protected-mode multiprocessor operating system. Its source text contains about 20,000 lines of code. A virtual machine approach is used to run platform-specific BIOS code. The system is implemented as reentrant code; it is ACPI-compliant and supports preemptive multitasking, symmetric multiprocessing, and Intel Hyper-Threading technology. Minimal system overhead is attained for time-critical functions. A regular WIMP-style GUI is offered; additional drivers for graphics cards are not required.<br><br>MD-OS can be conveniently used for implementing network services. It supports Ethernet networking (e.g., parallel computations in clusters); protocol stack configuration is handled using DHCP. The system can boot from all modern media, and network boot is also supported.<br><br>“Thin client” applications as well as embedded system code can be developed using MD-OS.<br><br>
Awards won at the 2006 ISEF
Third Award of $350 - IEEE Computer Society
First Award of $200 - Patent and Trademark Office Society
2006 - CS018
COMPILER DESIGN
Kristopher Kyle Micinski
Decatur High School, Decatur, Texas, USA
The goal of a compiler is to take input in a source language and produce output in a target language. In computer science terms, a compiler will convert a high
level programming language into a lower level language, usually one that the machine can execute more easily. Using different methods of compiler design, a
given input program could have various output programs that produce the exact same result. This project tests how different methods of compiler design can
affect the target code and the compiler itself. Though the project considers how design methods affect target code efficiency, code optimization is not the main
goal. That is to say the compiler focuses more on design issues than implementing many optimizations. However optimization is implemented when the
compiler architecture makes optimization advantageous. As compilers are usually separated into distinct parts, the project also tests how interactions between
stages of the compiler can be helpful in compilation. Tests on three different types of back ends which produce different target code are conducted, as well as
tests on two different types of front ends. Stack-based, register-based, and more complex basic-block code generation methods are tested. The project also tests how different types of parsers can produce the desired output: recursive descent and tail-recursive parsing methods are tested. The project presents the results of these compiler architectures and their effect on many input programs, for the purpose of drawing conclusions about how compilers can be designed.
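An illustrative sketch of the contrast between two of the tested back ends, stack-based versus register-based code generation, for a single expression. The instruction names and AST encoding are invented, and the project's actual back ends are more elaborate.

    from itertools import count

    OPS = {"+": "ADD", "*": "MUL"}
    AST = ("*", ("+", "a", "b"), "c")   # (a + b) * c

    def gen_stack(node, out):
        # Stack-based back end: operands are pushed; an operator pops two
        # values and pushes its result, so no register allocation is needed.
        if isinstance(node, str):
            out.append(f"PUSH {node}")
            return
        op, lhs, rhs = node
        gen_stack(lhs, out)
        gen_stack(rhs, out)
        out.append(OPS[op])

    def gen_register(node, out, regs):
        # Register-based back end: every subexpression result lands in a
        # fresh virtual register, which a later pass could allocate/optimize.
        if isinstance(node, str):
            r = next(regs)
            out.append(f"LOAD r{r}, {node}")
            return r
        op, lhs, rhs = node
        rl = gen_register(lhs, out, regs)
        rr = gen_register(rhs, out, regs)
        out.append(f"{OPS[op]} r{rl}, r{rl}, r{rr}")
        return rl

    stack_code, reg_code = [], []
    gen_stack(AST, stack_code)
    gen_register(AST, reg_code, count())
    print("\n".join(stack_code))
    print("\n".join(reg_code))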
Awards won at the 2006 ISEF
Second Award of $500 - IEEE Computer Society
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
Second Award of $150 - Patent and Trademark Office Society
2006 - CS038
ALEKHINE: COMPUTER PROGRAM THAT RESTRUCTURES USER ACCESS TO SOFTWARE COMPONENTS AND FREEBSD OPERATING SYSTEM PROCESSES
Jorge Andres Morales Delgado
Centro Educativo Bilingue del Caribe, Limon, COSTA RICA
The user normally performs common tasks through some system interface, for example formatting a diskette. <br><br>The problem is the difficulty and uncertainty caused by the interfaces known today. This gave rise to the idea of developing a stable, effective program capable of reducing this difficulty and facilitating multiple methods for combining the different programs. <br><br>The program works as a layer placed over the operating system, creating a new method of using systems that otherwise require basic knowledge the average user does not easily acquire.<br><br>The first step in designing the
algorithm was to write the code in the C programming language. The program performs tasks that complement the interfaces. The final step is the compilation of the code.<br><br>The results exceeded expectations: the interfaces were compatible with four operating systems, demonstrating great versatility as a result of being designed in C. The program is fully functional on FreeBSD and 90% usable on Slackware Linux.<br><br>
2006 - CS019
JAVA BASED ALGORITHM AUTOMATED PRESCRIPTION REMINDER & RENEWAL
Nadia N Naja
Dearborn Center for Math, Science & Technology, Dearborn, MI, USA
As the life spans of individuals continue to increase, dealing with elderly health issues through the use of prescription medications has become the norm. The purpose of editing the software created in last year's exploration was to add increased help for patients, based on their feedback, and to create a link from patient to pharmacy to speed prescription renewal. It was hypothesized that if the programmer efficiently administered surveys to the target client group, then the programmer could amend the software, RxAlert, based on the information extracted, to better suit the clients' needs as well as the needs of the prescription distributor. The application reminds a person, by continuously playing an audio sound, when the time has come for a medication to be taken, reducing the risk of under- and over-medication. Furthermore, RxAlert connects the patient and pharmacy computers through a server, creating a direct connection that cuts refill time dramatically. Development of the program was completed with the assistance of jGRASP, the Java SDK, and data compiled throughout the exploration. Using the four-step software development method, the RxAlert edits were completed easily, since changes could be made and tested in an organized and opportune fashion. Additionally, being an adaptable platform, Java can be deployed onto a variety of hardware, permitting RxAlert to be formatted for and uploaded to cell phones and handheld organizers with ease. Examination validates that the amendments and additions made to RxAlert during this refinement process create an application that informs efficiently for both patient and pharmacy.
Awards won at the 2006 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
Second Award of $1,500 - United States Air Force
2006 - CS029
DISPLACEMENT MAPPING BASED MESH SIMPLIFICATION FOR REMESHING
Muhammad Sadan Nasir
ADP IQRA, Quetta, Pakistan
Rendering meshes with complex geometric features is a common problem in the graphics visualization field. Processes such as mesh simplification and remeshing are used to simplify a mesh or to change the structure of an irregular mesh to semi-regular. A drawback of remeshing is the need to store parameterization data after obtaining the base domain from the mesh simplification process. The purpose of this research was to develop a method of mesh simplification that can represent the parameterization data as scalar values, i.e., a displacement map. For this purpose, different mesh simplification and parameterization techniques were tested, and a method was developed that can simplify a mesh and store its parameterization data as scalar values. These scalar values can then be turned into a displacement map for efficient remeshing or real-time hybrid level-of-detail mesh rendering.
2006 - CS049
CREATING A LEUKODYSTROPHY DATABASE USING FILEMAKER PRO 7
Farre Nixon
Pleasant Grove High School, Texarkana, TX, United States of America
The leukodystrophies are rare diseases characterized by the dysmyelination of the white matter in the brain. These diseases include Adrenoleukodystrophy,
Alexander disease, Krabbe disease, Metachromatic leukodystrophy, Pelizaeus-Merzbacher, and Canavan disease. Creating a database that compiles
resources relating to these diseases increases research efficiency. After gathering resources related to each leukodystrophy, the references were organized into an Excel spreadsheet in which the database fields could be distinguished. Using FileMaker Pro 7, the database was constructed, first by establishing tables and fields. After creating relationships between associated records, the individual page layouts were designed, and the information for each disease was entered. Subsequently, a web interface was designed, allowing public access to the database. The database eliminates repetitive and irrelevant sources from research articles regarding genetic information pertaining to the leukodystrophies, and by incorporating a search query and reference page, it allows the user to gain access to specific sets of information.
2006 - CS042
SIMULATING AN INFINITELY LARGE GRID WITH THE GAME OF LIFE
Eric Grey Oetker
Arkansas School for the Mathematics, Sciences, and Arts, Hot Springs, AR, US
The physical limitations of a single computer’s processing and storage capabilities have always created an obstacle when attempting to solve large problems.
While a single computer cannot contain an infinite amount of resources, continually adding computers into a system as required could provide an infinite
amount of resources if enough computers are available and the system implements an algorithm to successfully manipulate, store, and recover the data. The
purpose of this project was to determine whether a program could be developed to simulate an infinite grid by handling the manipulation and storage of a grid
between multiple computers, using Conway's Game of Life as an application. After an algorithm was created and turned into a program, the program was
tested using various numbers of clients, along with configurations that differed in rate of growth. The data was then analyzed, and a ratio was noted between the number of clients the system utilized and the final turn the system progressed to. It was determined that the total resources available varied directly with the number of clients connected: as more clients are added to the system, the program can support and process more information. Because of this relation between available resources and number of clients, the program could theoretically be scaled to infinite proportions, provided network and client resources were available. As a result, it was determined that the program did in fact simulate an infinite grid.
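A minimal sketch of the partitioning idea: each client advances only its own strip of rows after exchanging boundary ("ghost") rows with its neighbors. The networking layer is omitted, and columns wrap around purely to keep the sketch short; the project's actual storage and recovery scheme is not described in enough detail to reproduce.

    def next_cell(alive, neighbors):
        # Conway's rules: a live cell with 2-3 neighbors survives; a dead
        # cell with exactly 3 neighbors becomes alive.
        return neighbors == 3 or (alive and neighbors == 2)

    def step_strip(strip, row_above, row_below):
        # One client advances its strip of rows; the bordering rows held
        # by neighboring clients are exchanged before each turn.
        rows = [row_above] + strip + [row_below]
        width = len(strip[0])
        out = []
        for r in range(1, len(rows) - 1):
            new_row = []
            for c in range(width):
                n = sum(rows[r + dr][(c + dc) % width]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                new_row.append(1 if next_cell(rows[r][c], n) else 0)
            out.append(new_row)
        return out

    strip = [[0, 1, 0, 0], [0, 1, 0, 0]]       # this client's rows
    above, below = [0, 0, 0, 0], [0, 1, 0, 0]  # ghost rows from neighbors
    print(step_strip(strip, above, below))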
2006 - CS030
TWO EYES VISION MOTION DETECTION, TRACKING AND 3D POSITIONING OF AN INTRUDER
Raphael Ouzan
Boys Town Jerusalem, Israel
This project presents a new approach to computer vision. It aims at building algorithms that imitate the way the brain works and applying them to an advanced
security system. This security system is composed of two cameras positioned 7 centimeters apart and connected to a computer in order to imitate human eyes
and brain. The security system is able to detect, track and compute the 3D position of an intruder in real time in addition to using a robot to constantly direct a
laser at the target. <br><br> Unlike a typical computer algorithm, in this research an image is not considered as a two dimensional array of pixels. Instead of
pixels, vectors of motions are compared between frames to detect motions. It provides not only a more accurate detection, but also the direction and the
velocity of the motion. Thus, the place of the intruder in the subsequent frames can be predicted. This is what the brain does, for example, when catching a ball
thrown in the air.<br><br> The application is implemented in C++ according to a strong Object Oriented design leading to high modularity and reusability. <br>
<br> This project proves that furthering our understanding of the human brain is a promising path by which to progress in both vision and sensory emulation.
Moreover, this innovation enables highly advanced applications in numerous fields such as auto-targeting weapons, surveillance, medicine, industry and
autonomous robots.<br><br>
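A minimal sketch of the triangulation step implied by the two-camera setup: with the 7 cm baseline stated above and an assumed focal length in pixels, depth falls out of the disparity between the two images. The motion-vector matching that produces the correspondences is not reproduced here.

    def stereo_position(x_left, x_right, y, focal_px=700.0, baseline_m=0.07):
        # Two-camera triangulation. Pixel coordinates are measured from
        # the image center; depth is inversely proportional to disparity,
        # the horizontal shift of the same point between the two images.
        disparity = x_left - x_right
        if disparity <= 0:
            raise ValueError("left-image x must exceed right-image x")
        z = focal_px * baseline_m / disparity   # depth in meters
        x = x_left * z / focal_px               # lateral offset in meters
        y = y * z / focal_px                    # vertical offset in meters
        return x, y, z

    # A feature at x=320 in the left frame and x=290 in the right frame.
    print(stereo_position(320, 290, 40))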
Awards won at the 2006 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
2006 - CS004
CREATING AN ONLINE STUDY SESSION NETWORK
Daniel Jonathan Parrott
Bartlesville High School, Bartlesville, Oklahoma, United States
Universities and other institutions are beginning to use software to collaborate online. In its present form, this kind of discussion is largely limited to a forum-based network. However, there are some advantages to real-time collaboration. Certainly, students already know how to take part in “chatroom” conversations
and instant messaging, but this is not done via a synchronized network that all students access. The idea, then, is to enable academic institutions to operate
their own private collaboration network, complete with instant messaging, file transfers, study session rooms, virtual chalkboards, moderated discussions, and
online access to previous sessions. “basiCommunication v3.0” implements all of these features. Students can now use the client program to create and join
their own online study sessions, hosted on a server that their organization manages. As it gains in popularity, the network becomes a central hub around which
news and other announcements can be made. It is capable of hosting over 4000 simultaneous users, up to 200 rooms, and 20 users per room. In a four-month test of the network at Bartlesville High School, the software proved invaluable to students as they prepared for tests and planned upcoming events.
Awards won at the 2006 ISEF
First Award of $700 - IEEE Computer Society
2006 - CS020
VIABILITY OF AN ALTERNATIVE LINKING ALGORITHM
Jonathan J. Pezzino
Nicolet High School, Glendale, Wisconsin
A small world network is a decentralized system consisting of a lattice of interlinked nodes that discretely alter their states based on the states of the nodes to
which they are linked. Such networks have myriad applications, from neurons in the brain to computers connecting to the Internet, to spread of information and
disease in human social networks. It is often useful for networks to coordinate, where every node in the lattice assumes the same state. One factor affecting
coordination is the topology of links between nodes.<br><br> This study developed an algorithm that generated an alternative linkage topology between the
nodes in the lattice of a small world network. This alternative linking algorithm generates link topologies that more realistically reflect real-life social networks,
incorporating both spatial proximity and characteristics of nodes as deciding factors in how nodes are linked.<br><br>Networks linked using the alternative
linking algorithm and a standard linear linking algorithm were tested for their tendency to coordinate using software written by the researcher. The total number
of times out of 1000 trials that a network coordinated was measured using a metric known as the density classification task.<br><br>Results showed networks
linked with the alternative linking algorithm coordinated a maximum of 27 times per 1000 trials, while networks linked with a standard linear linking algorithm
coordinated an average of 107 times per 1000 trials. This study concludes that an alternative linking algorithm is not a viable option for increasing coordination
in a small world network.<br><br>
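An illustrative sketch of the experimental loop, under assumptions: a standard ring lattice with random shortcuts stands in for the linking algorithms (the alternative algorithm itself is not specified in enough detail to reproduce), synchronous majority-rule updates stand in for the node dynamics, and coordination is counted over repeated trials as in the study.

    import random

    def small_world_links(n, k, p, rng):
        # Ring lattice with shortcuts: each node links to its k/2 nearest
        # neighbors on each side; with probability p a link is redirected
        # to a random node instead.
        links = {i: set() for i in range(n)}
        for i in range(n):
            for d in range(1, k // 2 + 1):
                j = (i + d) % n
                if rng.random() < p:
                    j = rng.randrange(n)
                if j != i:
                    links[i].add(j)
                    links[j].add(i)
        return links

    def coordinated(links, max_steps, rng):
        # Majority-rule updates; the network "coordinates" when every
        # node holds the same state (ties resolve to 0 here).
        n = len(links)
        state = [rng.randint(0, 1) for _ in range(n)]
        for _ in range(max_steps):
            state = [1 if 2 * sum(state[j] for j in links[i]) > len(links[i])
                     else 0 for i in range(n)]
            if sum(state) in (0, n):
                return True
        return False

    rng = random.Random(1)
    wins = sum(coordinated(small_world_links(100, 4, 0.1, rng), 50, rng)
               for _ in range(1000))
    print(f"coordinated in {wins} of 1000 trials")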
Awards won at the 2006 ISEF
Tuition Scholarship Award of $5,000 per year for 4 years for a total value of $20,000 - Indiana University
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2006 - CS302
COMBINING TOUCH SCREEN AND SCANNER INPUT FOR HANDWRITING RECOGNITION
Alan Garrett Pierce, Nathaniel John Broussard
School of Science and Technology, Beaverton, OR, USA
Our goal is to design and program software that can recognize handwritten letters and words. Our project is a continuation from last year, but
it has a different focus. Our goal this year is to find in what ways the program can become more convenient and accurate when recognition using data from a
touch screen, called on-line recognition, is combined with recognition using data from a scanner, called off-line recognition.<br><br> By combining on-line and
off-line recognition systems, we have made the program fundamentally more convenient than other handwriting recognition programs in several ways. In
addition to convenience, though, this combination also allows for greater accuracy; our system has a fundamental advantage over entirely off-line systems
because it is able to get data from a touch screen that is very hard to reliably get from an image.<br><br> Our program has been greatly improved since last
year. We restructured and improved many parts of the program, we added a graphical interface, and we made a system for recognizing touch screen data. As
long as the letters are correctly divided, the on-line handwriting recognition is very accurate. The program can also recognize images more accurately than last
year because they are partially recognized using data from a touch screen. Because our program has a higher potential for convenience and accuracy than
standard handwriting recognition programs, our project successfully shows the potential that can be unlocked by combining off-line and on-line handwriting
recognition.
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
2006 - CS028
A NOVEL APPROACH TO THE AUTOMATIC RECOGNITION OF EMOTIONS IN NATURAL SPEECH
Caroline Elizabeth Pietsch
Ossining High School, Ossining, NY, USA
Previous studies on the recognition of emotions in speech have analyzed segments of acted speech, which differs from the way humans speak in our everyday
lives. This study describes a novel approach to emotion recognition, utilizing a decision tree trained on the prosodic features of natural speech. 286 segments
of the MALACH database were assigned to one of four emotional categories used in this research: anger, happiness, sadness or neutrality. Weka, a collection
of machine learning algorithms, was used to automatically train decision trees based on different combinations of prosodic features derived from these speech
segments. Decision trees were tested on previously set-aside testing data. An accuracy rate of 45% was achieved when determining which of the four
emotions a segment expressed. An 80% accuracy rate was obtained when determining whether or not a segment expressed happiness, and an accuracy rate
of 78% was attained when determining if a segment expressed anger. This research has practical applications in the field of human-machine communications,
and could prove especially effective in improving interactive voice response systems.
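A minimal sketch of the classification setup. The study used Weka's decision-tree learners on prosodic features of MALACH segments, so scikit-learn's DecisionTreeClassifier is only a stand-in here, and the feature rows and labels below are fabricated for illustration.

    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Each row: [mean pitch (Hz), pitch range (Hz), energy, speaking rate].
    X = [[220, 90, 0.8, 5.1], [180, 40, 0.3, 3.2], [250, 120, 0.9, 5.8],
         [190, 35, 0.2, 3.0], [230, 100, 0.7, 5.5], [185, 30, 0.3, 2.9]]
    y = ["anger", "sadness", "happiness", "neutral", "anger", "sadness"]

    # Hold out a test set, train a shallow tree, and report accuracy.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                              random_state=0)
    tree = DecisionTreeClassifier(max_depth=3).fit(X_tr, y_tr)
    print("held-out accuracy:", tree.score(X_te, y_te))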
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
Honorable Mention Award - American Psychological Association
Award of $1,000 - Acoustical Society of America
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
2006 - CS024
STATISTICAL-BASED ADAPTIVE BINARIZATION FOR DOCUMENT IMAGING
Nat Piyapramote
Sarasit Phithayalai School, Ratchaburi, THAILAND
Camera phones and hand-held digital still cameras have simplified the recording of image-based information. However, due to lighting condition, focus,
shadow, and many other factors, the captured image is too degraded to be used in other applications such as Optical Character Recognition (OCR).<br><br> I
proposed a new binarization technique based on region splitting, with a skewness coefficient measured via the mean, median, and standard deviation as the splitting criterion. Edge information, obtained with the difference-of-Gaussians technique, is used to determine the text region. An improved version of Otsu's algorithm that considers the sign of the skewness coefficient, and can therefore better handle text varying in color, is then used to threshold the text region. In addition, the proposed technique was compared with other methods, e.g. Hui-Fuang, Niblack, and Sauvola. The test images were captured by digital cameras at 72 and 120 dpi, at a size of 1600x1200 pixels, using macro mode.<br><br> The error rate was determined by measuring the Levenshtein distance between ground-truth data and the recognized text. After experiments with many document images, the proposed binarization method was found to improve recognition of camera-captured documents: experimental results show that it decreases recognition error rates from 40-90% to 20%.
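For reference, a toy version of the classical Otsu threshold that the proposed method builds on; the region splitting and skewness-sign refinement described above are not reproduced here.

    def otsu_threshold(pixels):
        # Otsu's method: choose the gray level that maximizes the
        # between-class variance of the background/foreground split.
        hist = [0] * 256
        for p in pixels:
            hist[p] += 1
        total = len(pixels)
        sum_all = sum(i * hist[i] for i in range(256))
        best_t, best_var, w_bg, sum_bg = 0, 0.0, 0, 0.0
        for t in range(256):
            w_bg += hist[t]                 # background pixel count
            if w_bg == 0:
                continue
            w_fg = total - w_bg             # foreground pixel count
            if w_fg == 0:
                break
            sum_bg += t * hist[t]
            mean_bg = sum_bg / w_bg
            mean_fg = (sum_all - sum_bg) / w_fg
            var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # A bimodal toy image separates cleanly.
    print(otsu_threshold([10] * 50 + [200] * 50))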
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
First Award of $1,000 - Association for Computing Machinery
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2006 - CS025
PROGRAM TASK DISTRIBUTION, PERFORMANCE, AND SCALABILITY OF A KNOPPIX-OPENMOSIX CLUSTER
Jean-Louis Rochet
Colegio San Ignacio de Loyola, San Juan, PR, USA
In the continuing search for increased performance in powerful computing systems, computer clustering stands as the most innovative and practical way to
achieve this goal. Universities and research institutions alike have turned to this method of supercomputing because of its low cost, high scalability and future
potential. The purpose of this investigation was to evaluate and analyze the cluster scaling feasibility, program task distribution, and performance of a Knoppix-openMosix cluster of up to 4 nodes. The following was hypothesized: openMosix will be able to efficiently distribute tasks generated by PVMPOV among the assigned nodes; a general speed-up trend will be observed with each size increase of the cluster; the speed-up factor will gradually decline with the addition of each node; and extrapolations based on the benchmark test data will predict a speed-up peak in relation to cluster size. Based on the results of the experiment, it was concluded that all of these hypotheses were correct except for the one predicting a gradual reduction in the speed-up factor, since that factor was slightly erratic. It was also concluded that an uncontrolled variable affected the results in a small way. This variable appears to be an openMosix technical limitation, which gives clusters a small advantage over single nodes. This phenomenon affected the ability to make accurate extrapolations from the test data, but in general the hypotheses were effectively tested. As an application of this investigation, Knoppix-openMosix solutions can be used in the fields of graphical rendering, simulations, and demonstrations.
2006 - CS312
QUICK-LAC
Yanet Rojas Caracheo, Angélica Arias Gallegos, Gabriela Piña Ortiz
Colegio de Estudios Cientificos y Tec. del estado de Gto., Cortazar, Gto, México
After conducting interviews and surveys at different technical high schools, we noticed that in the tracks related to computer programming, students have difficulty understanding the logic of programming. We observed that these schools had no practical material with which to teach students before teaching them a specific programming language, and this is the step at which students fail to grasp the basis of this logic. Quick-Lac was created to fill this gap: it is software in which the user follows the typical instructions of programming, expressed as pseudocode, but applied in a practical way.<br><br>These instructions drive a virtual robot, which lets the user apply basic programming concepts, such as loops and control statements, to its control; moreover, the effect of the code can be seen directly in the robot's movements, making the instructions easy to understand and encouraging students to think logically and creatively as they check the program directly and see the outcomes graphically.<br><br>The student develops the program, checks the outcomes, and evaluates any modification that is necessary.<br><br>Function: the user enters in the text editor the code needed for the robot to perform different actions. When the user executes the code, the robot carries out the given instructions on its graphical stage.<br><br>Internally, the software consists mainly of an interpreter that translates the code of this small programming language. It was written in Turbo C++ 2.0, which allows the program to be used on computers without a hard disk or an operating system.<br><br>
2006 - CS022
KEYQR-CODE: A DESIGN AND IMPLEMENTATION OF A PERSONAL AUTHENTICATION SYSTEM WITH A 2-DIMENSIONAL BAR CODE
Kohei Saijo
Tokyo Gakugei University Senior High School Oizumi Campus, Nerima, Tokyo, Japan
I designed a personal authentication system utilizing QR-code(*1). The system increases the security level while requiring only a PC, a mobile phone, and a USB camera.<br><br>(*1) QR (Quick Response) Code is the de-facto standard 2-dimensional bar code, similar to PDF417, DataMatrix and/or Maxi Code, decoded into textual information by the camera built into a mobile phone. More than 70% of Japanese people own a mobile phone, equivalent to 84 million mobile phones.<br><br>The system uses a QR-Code as a key to authenticate individuals to a designated PC via the actual mobile network, as follows: (1) personal information is registered before use; (2) the user sends a request mail to the PC; (3) the PC validates the mail address and password; (4) the PC generates a one-time QR-Code and sends it back to the user's mobile phone; (5) the received QR-Code is scanned with the USB camera on the PC; (6) the user is authenticated by validating the generated QR-Code.<br><br>This system enables the following:<br><br>(1) issuing a software key over a distance via the mobile network;<br><br>(2) controlling the registered information on the administrator side;<br><br>(3) generating a highly secure key that includes various information.<br><br>The prototype implementation successfully illustrates the positive features of KeyQR-Code, based on the characteristics of QR-Code (information capacity), the mobile network system (everyone has a mobile phone), and the flexibility of IT (programmable code), increasing the security level while keeping authentication rapid.<br><br>
2006 - CS308
GEOMETRIC COMPRESSION SYSTEM
Giuliano Sider, Rodrigo Luger
Escola Americana de Campinas, Campinas SP, Brazil
The Geometric Compression System (GeoZip) presents a new method of efficient data compression that can reduce file sizes by up to eight times. Rather than
performing the compression at the software level, GeoZip compresses data by altering the way data is physically stored on devices. The project covered the development of the software that prepares the data for recording.<br><br>GeoZip rearranges streams of data into a system of polar coordinates (hence
the "geometric" aspect of the system). Data is plotted onto a "virtual disk" based on its byte values - in the future a device will be built to perform the actual
physical recording of the data. In the first round of compression, an entire layer of disc (composed of several tracks each at a different angle) is filled with data
points located at a distance equivalent to their respective ASCII values. The empty space in each track is then filled with additional data points at distances equal to their values, but in distinguishable shades of red, until an overlap occurs (two data points occupying the same position). Upon such an overlap, data recording in that sector is aborted and GeoZip proceeds to the next layer on the disk.<br><br>Because up to 255 data points encoded in base 256 can fit in a space where 255 data points may be plotted, maximum efficiency is obtained with diverse data sources such as pictures, music, and videos. GeoZip's capabilities on DVDs would allow approximately five to eight times more running time per disc. <br><br>
Awards won at the 2006 ISEF
Fourth Award of $500 - Team Projects - Presented by Science News
2006 - CS036
FRACTALS IN MEDICINE?
Lavanya Sivakumar
Franklin Regional Senior High School, Murrysville, Pennsylvania, USA
The purpose of this project was to write a program classifying grayscale MRI brain images as normal and abnormal using fractal geometry. Fractals are
geometric patterns that are repeated at multiple scales. Fractal dimension is a measure of the complexity of a figure. MRIs are radiological images of internal organ systems, imaged by directing a strong magnetic field through the area. <br><br> I developed my software application within an IDE called MATLAB. I
first characterized the MRI images using fractal features. I also developed statistical measures for the asymmetry of an image. I computed the statistics of the
error vector obtained by subtracting one half of the image from the other half (split along the axis of symmetry).<br><br> After extracting the features from the
MRI images, I trained a neural network from a “training set” containing normal and abnormal images. The neural network I used was a single layer probabilistic
neural network (PNN) built using the neural network toolbox within MATLAB. The effectiveness of the neural network was then computed by applying the
trained PNN to a “testing set” of images.<br><br> My program differentiated normal and abnormal MRI brain images with a 95-100% success rate. I plotted my
data on scatterplots and found that the individual features were spatially clustered with very little overlap of normal and abnormal images. The fractal and
statistical features, coupled with the neural network, make a very powerful classifier that has great promise for biomedical applications.<br><br>
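A minimal sketch of one common way to estimate fractal dimension from an image: box counting over a set of foreground pixel coordinates. The abstract does not state which fractal features were used, so this is an assumed example rather than the project's MATLAB method.

    import math

    def box_counting_dimension(points, sizes=(64, 32, 16, 8, 4)):
        # Count occupied boxes N(s) at several box sizes s, then fit the
        # slope of log N(s) against log(1/s); that slope estimates the
        # fractal (box-counting) dimension of the point set.
        logs = []
        for s in sizes:
            boxes = {(x // s, y // s) for x, y in points}
            logs.append((math.log(1.0 / s), math.log(len(boxes))))
        n = len(logs)
        mx = sum(x for x, _ in logs) / n
        my = sum(y for _, y in logs) / n
        return (sum((x - mx) * (y - my) for x, y in logs) /
                sum((x - mx) ** 2 for x, _ in logs))

    # A filled square region should come out close to dimension 2.
    square = [(x, y) for x in range(128) for y in range(128)]
    print(box_counting_dimension(square))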
2006 - CS041
THREE-DIMENSIONAL FACE RECOGNITION FROM VIDEO: FACIAL SURFACE RECONSTRUCTION AND ANALYSIS USING TENSOR ALGEBRA AND
DIFFERENTIAL GEOMETRY
Justin Moore Solomon
Thomas Jefferson High School for Science and Technology, Alexandria, VA, USA
This paper presents the second phase of a multi-year project to achieve a functioning three-dimensional (3D) face recognition system. In concert with the series
of algorithms developed during the first year of research, this phase of the project addresses the problem of relying on laser range or structured light scanning
equipment for recognition input by devising a novel system of face shape reconstruction from video. Rather than using a simplistic linear model to express face
shape variation, the iterative algorithm presented in this project uses a “morphable” tensor model that accounts for changing expressions and visemes, or facial
positions resulting from the articulation of a particular sound. Face shape is optimized based on two error metrics: optical flow feature tracking and face surface
reflectance computation. Preliminary testing of the algorithm reveals that it is able to capture subtle face features, including nose shape and cheek structures,
which earlier feature tracking algorithms were unable to detect. Combined with the algorithms presented in the first year of research, these methods comprise a
workable system of recognition in which faces may be identified from video in varying poses, expressions, and lighting conditions. These findings represent
significant progress toward the development of a practical system of 3D face recognition and suggest the possibility of broader applications in computer vision
and graphics.
Awards won at the 2006 ISEF
Award of $500 - American Association for Artificial Intelligence
Honorable Mention Award of $200 - Association for Computing Machinery
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Award of $1,000 - Mu Alpha Theta, National High School and Two-Year College Mathematics Honor Society
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the United States Navy and Marine Corps.
UTC Stock with an approximate value of $2000. - United Technologies Corporation
2006 - CS014
CAN PASSWORD BASED CHECKSUMS BE USED TO SAFELY AND SECURELY ENCODE DATA?
Elliott James Stepusin
Celebration High School, Celebration, Florida, Osceola County
This project is an experiment in using password-based checksums to encode and decode data. The initial purpose was to create a simple but powerful data encoder. The idea of password-based checksums eventually led to an algorithm that could successfully encode data based on an inputted password, as opposed to per-byte encryption.<br><br> Several challenges arose. The program had to ensure that the output file could only be decoded with the correct password; if someone tried to decode it using an incorrect password, the file would be ruined. This problem was solved by implementing an encrypted line at the top of the file which incorporated the password without giving it away. This proved successful, and the final product was able to encode a file and protect it using just text. <br><br> After ensuring an efficient encoder, a decoder was created. It had to reproduce the file 100% exactly as the original, a hard task due to the tabs, spaces, and returns throughout the file. Parsing out all non-printing characters and
printing them exactly as they were, along with the encoded text, solved this problem. Preserving the spacing was key so that the file would have no errors. In the end the program, along with the algorithm, worked smoothly. All of the tested files encoded and decoded quickly and precisely, and file sizes were all reasonable. The method of password checksums proved to be an efficient and useful idea.
2006 - CS046
HOW LONG DOES A GAME OF WAR TAKE?
Michael Warren Swift
FHE, Flagstaff Arizona, USA
The card game War always takes a long time. I was curious about how long a War game usually takes. I decided to write a computer program to play it to
answer some of my questions. I used Mathematica. Once I had fixed all the bugs, I had my program play a million games. The program collected data about
the lengths of the games. I used this data to find various kinds of information, such as the average length, the most common length, and the median length. I
really enjoyed writing my program. The data showed that even length games were about 2.5 times as common as odd length games. I was able to explain this
interesting piece of information logically.<br><br>I learned a lot about programming, and I found out some interesting information. Building on my experience
with this program, I hope to write more complex programs in the future.
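A minimal sketch of such a simulation, in Python rather than the Mathematica of the original program. Rule details the abstract leaves open, three face-down cards per "war" and won cards returned in shuffled order, are assumptions here.

    import random
    from collections import deque

    def play_war(seed=None, max_turns=20000):
        # One game of War: the higher card takes both; ties trigger a
        # "war" of three face-down cards plus one face-up card each.
        rng = random.Random(seed)
        deck = [rank for rank in range(2, 15) for _ in range(4)]
        rng.shuffle(deck)
        a, b = deque(deck[:26]), deque(deck[26:])
        turns = 0
        while a and b and turns < max_turns:
            turns += 1
            pot = [a.popleft(), b.popleft()]
            while pot[-2] == pot[-1]:
                if len(a) < 4 or len(b) < 4:
                    return turns          # a player cannot complete the war
                for _ in range(3):        # three face-down cards each
                    pot.append(a.popleft())
                    pot.append(b.popleft())
                pot.append(a.popleft())   # face-up cards decide the war
                pot.append(b.popleft())
            winner = a if pot[-2] > pot[-1] else b
            winner.extend(sorted(pot, key=lambda _: rng.random()))
        return turns

    lengths = [play_war(seed=s) for s in range(10000)]
    evens = sum(n % 2 == 0 for n in lengths)
    print("mean length:", sum(lengths) / len(lengths))
    print("even:odd ratio:", evens / (len(lengths) - evens))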
2006 - CS005
"MAGIC WORKPLACE STUDIO 2005": UNIQUE COMPLEX OF APPLIED SOFTWARE BASED ON SPECIALLY DEVELOPED TECHNOLOGIES
Anatoli L. Tsyporyn
Lyceum BSU, Minsk, BELARUS
This project presents a complex of software packages forming a unified electronic-workplace system. New technologies were developed while creating the applications:<br><br>FastExploring Technologies - accelerated browsing of web pages using the clipboard. Multilanguage Project - creating projects using several programming languages. Computer Cinema Technologies - showing video presentations in real time via the Internet, using only a modem.<br><br>Image Editing Technologies - changing the size of a picture without loss of quality.<br><br>Several of the author's components from the Magic WorkPlace Component Library are used in the program: MW.Button, MW.CheckBox, MW.ImageControl, MW.Form, MW.ObjectLibrary, MW.ComboBox, MW.ProgressBar. <br><br>MW is compatible with Microsoft® Windows® applications.<br><br>The scheme of the project:<br><br>Magic Workplace Professional Edition 2005: MW Installation System 2005; MW Auto System 2005; Magic Office: Document Edition; an editor for creating schemes; MW Media 2.0; Magic Office: Scripting Edition - a system for creating Multilanguage Projects, containing editors of projects and schemes and a library of ready objects for HTML and JavaScript; Icon Edition; Web Explorer 2005 Deluxe - a multiwindow system for browsing web pages; ComputerCinema 2005 - a program complex for video observation of the PC user's actions.<br><br>MW 2005: Mobile Edition - an electronic variant of a mobile phone: number dialing; Mail; a "Media" folder; Tools; support for Macromedia® Flash® games and prompts.<br><br>Help system for all applications. Support: www.asc.by.ru
2006 - CS044
THE UNPREDICTABLE IS PREDICTABLE
Alejandro Sebastian Valdivia
Grants High School, Grants NM, USA
The hypothesis of this project is that the spread of a contaminant in a classroom is not random, and that the computer-simulated random spread and the scale-model random spread will be similar to one another but different from the classroom experiments.<br><br>The scale model simulation was
prepared by making a 1/24th scale box. Fifteen clean marbles and one marble coated in glo-germ were randomly dispersed within the box. The box was placed
on four shaker-mixers and after one minute the marbles were checked for contamination. This was repeated twenty times. <br><br>The computer simulation
was designed to simulate a random contamination similar to the scale model simulation. This was done using Visual Basic.<br><br>The classroom experiment
was performed by coating an object, within the classroom, in glo-germ. After one hour each student was scanned with a black light for glo-germ. The results
were recorded, and the test was repeated multiple times.<br><br>This project showed that the spread of a contaminant is not random. The scale model simulation and the
computer simulation were very similar to one another with the average infection percentage being 100% and 92.81%. This is very different than the classroom
contaminant spread, which only had an average infection rate of 37.31%. This proves that the contaminant spread in the classroom is not random because it is
not similar to the random spreads. Theoretically, a scientist could then predict a trend, which could be used to help prevent the spread of a disease or
contaminant.<br><br>
2006 - CS012
TEACHING A COMPUTER TO UNDERSTAND ENGLISH
Michael Eric Warkander
Rutherford High School, Panama City, Florida, USA
Language is a mostly arbitrary combination of symbols. I have written a program in PHP that allows a computer to parse, or interpret the structure of, sentences
as people do.<br><br> My parser works in essence by diagramming sentences. It looks up each word in its database, finding, for instance, that "tree" is a
neuter singular noun with a unique identification number. Then it calls a recursive function that can build any grammatical structure by calling itself to build
smaller components. <br><br> Most words have many possibilities for their characteristics; the parser loops through multiple combinations of possibilities until
one can form a sentence. For example, for "Bob ran", the verb "bob" with the verb "ran" cannot work (two verbs), but the noun "Bob" and the verb "ran" can.
<br><br> To make the problem manageable, I made the parser as "dumb" as I could. If a combination fails, the parser just tries the next combination. Similarly,
each time it needs a certain kind of phrase, it operates the same way, for example, to generate a noun phrase as a sentence's subject or a preposition's object.
<br><br> The parser that now exists is the first step toward a computerized question-answering system to find information on the Web. The full system would
work by parsing a question, finding and parsing information from the Web, matching the question to the relevant information, and generating sentences from
that information. Besides being the main component of the system, the parser would be the basis for the sentence-generator.
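A minimal sketch of the core idea: a recursive-descent check that matches words against a tiny grammar via a lexicon of parts of speech. The lexicon and grammar here are toy stand-ins; the real parser, written in PHP, handles ambiguity by looping over combinations of word characteristics, which this single-pass sketch only hints at.

    # Tiny grammar: Sentence -> NounPhrase Verb; NounPhrase -> [Det] Noun.
    LEXICON = {"bob": {"noun"}, "tree": {"noun"}, "ran": {"verb"},
               "fell": {"verb"}, "the": {"det"}}

    def parse_noun_phrase(words):
        # Returns how many words the noun phrase consumed, or 0 on failure.
        if words and "det" in LEXICON.get(words[0], set()):
            if len(words) > 1 and "noun" in LEXICON.get(words[1], set()):
                return 2
            return 0
        if words and "noun" in LEXICON.get(words[0], set()):
            return 1
        return 0

    def parse_sentence(words):
        # A sentence is a noun phrase followed by exactly one verb.
        np_len = parse_noun_phrase(words)
        return (np_len > 0 and len(words) == np_len + 1
                and "verb" in LEXICON.get(words[np_len], set()))

    print(parse_sentence("bob ran".split()))        # True
    print(parse_sentence("the tree fell".split()))  # True
    print(parse_sentence("ran bob".split()))        # False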
2006 - CS303
CONTEMPORARY 2D PROGRAMMING
Yue Kenneth Yuan, Drew Bratcher
Houston County High School, Warner Robins, GA, USA
The purpose of the experiment was to create a graphical and working game using the Java programming language. To do this, we had to acquire a Java
programming book, learn the Java programming language, download the Eclipse IDE, write the code using the book as a guide, run the code as a Java applet
for errors, and make modifications when errors occurred. We achieved our engineering goal by creating a 2-D side-scrolling video game in the Java language. Along the way, we learned a lot about graphic design and computer programming.
2007 - CS049
SMART WEB BUILDER
Andrey I. Sumin
Lyceum of Information Technologies # 1537, Moscow, Russian Federation
The aim of the project is to create a software product that can be used to develop web pages, Internet sites and interactive web applications and also to provide
comfortable work with FTP servers. The project is timely because the role of Internet technologies is constantly increasing in practically all areas of human activity. More and more people find themselves involved in creating and using Internet applications, which is why it is necessary to make the methods and tools of
high-quality web development easier and more advanced. As a result a powerful web project development tool has been created.<br><br>The program has
been mainly implemented in Borland C++ Builder IDE. The code of the program has been split into modules with separate classes encapsulating the
implementation of a multiple document interface, edit window management, syntax highlighting, code insight, a code sweeper, a built-in code style control tool,
FTP support and a multilingual interface. It is possible to flexibly extend the features of the program due to the system of plug-ins implemented as dynamic link
libraries.<br><br>Site Manager integrated in Smart Web Builder makes it possible to effectively arrange the file structure of the Internet application under
development and allows a project to be created by a group of developers. As for interaction with an external environment, one of the most
considerable service tools in Smart Web Builder is an FTP explorer providing comfortable ways of working with sites via the FTP protocol. The library of
WinInet functions has been used to implement the FTP explorer. Smart Web Builder has a set of specialized tools and services built into it. It is possible to view
the web page being edited in the built-in browser, in external browsers and on the local server. The program can check whether html, xhtml and xml code
complies with the industry standards, it has context help and reference books with functions used in web programming languages.<br><br>The developed
software product makes it considerably easier to create high-quality Internet applications and has a wide range of application: from creating small home pages
to developing dynamic web portals by both professional web masters and amateurs.<br><br>
2007 - CS047
PDOCA MOBILE HEALTH MONITORING
Alexey M. Antonov
Lyceum of Information Technologies # 1533, Moscow, Russian Federation
Lately, as miniaturized sensors and inexpensive communications channels emerged, mobile health monitoring applications became feasible. Mission-critical
medical data (pulse rate, blood pressure, oxygen level in blood) can be collected and pre-processed ‘on-board’ and/or transmitted online to the powerful
desktop platform.<br><br>“Personal Doctor Assistant (PDocA)” project uses Microsoft Windows Mobile smartphone (handheld computer, communicator)
functionality in conjunction with Microsoft .NET Compact Framework 2.0 development environment. C# language in Microsoft Visual Studio 2005 was used.
Microsoft Bluetooth Stack and SerialPort profile were used for PDocA to desktop communication; SMS protocol for Windows Mobile makes cellular network
connection possible.<br><br>Three smartphone-based PDocA applications include:<br><br>- two versions of Lusher psychological test implementation
(standalone mode)<br><br>- fatigue psychological test implementation (standalone mode)<br><br>- Nonin 4100 pulseoximeter mobile client application<br>
<br>The latter consists of Bluetooth data communication module and data visualization module.<br><br>Desktop client program facilitates collecting, viewing,
commenting and editing of sensor data performed by an expert; these data are saved using XML format. At the same time, the person wearing sensor can
mark certain segments of data graphs (e.g., extreme physical load) using mobile device buttons. Microsoft Excel can be used for data analysis.<br><br>The
planned enhancements include Widcomm Bluetooth Stack support and Microsoft ActiveSync data synchronization mode.<br><br>
2007 - CS027
ENHANCING PASSWORD VERIFICATION WITH KEYSTROKE USING SEQUENCE ALIGNMENT ANALYSIS
Phiradet Bangcharoensap
Buranarumluk School, Trang, Thailand
Recently, keystroke verification has received increasing attention as a way of improving password security systems. Indeed, a password alone cannot guarantee the security of a system, since it can be compromised in several ways. Keystroke verification, on the other hand, is based on personal behavior and hence is hard to steal. Moreover, this behavioral biometric is difficult to forge and its verification does not require any additional hardware. However, this method has
higher overall error rates than physiological biometrics such as fingerprints, which is why it has not yet been used extensively.<br><br> To reduce the overall error rate, this project studies the application of a sequence alignment algorithm to the analysis of keystroke data. Sequence alignment is often used in biology, for example to find relationships between primary sequences of DNA, RNA, or protein. Experimental results with volunteer subjects show that, for passwords of 8-20 characters, the sequence-alignment-based keystroke verification gives less than 10% EER (Equal Error Rate), 7% FAR (False Accept Rate), and 16% FRR (False Reject Rate).<br><br> This project can be applied to enhance the password security systems used in many computer systems (e-mail, ATM, system login, etc.) or to protect data from unauthorized users.<br><br>
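As a rough illustration of the idea, the sketch below scores two keystroke-timing sequences with classic Needleman-Wunsch global alignment; the 50 ms quantization bins and the +1/-1 scoring scheme are assumptions made for illustration, not the project's parameters:

```java
// A minimal global-alignment (Needleman-Wunsch) sketch of the kind of
// sequence-alignment scoring the project applies to keystroke data.
public class KeystrokeAlign {
    static final int MATCH = 1, MISMATCH = -1, GAP = -1;

    // Classic O(n*m) dynamic program over two symbol sequences.
    static int align(int[] a, int[] b) {
        int[][] s = new int[a.length + 1][b.length + 1];
        for (int i = 1; i <= a.length; i++) s[i][0] = i * GAP;
        for (int j = 1; j <= b.length; j++) s[0][j] = j * GAP;
        for (int i = 1; i <= a.length; i++)
            for (int j = 1; j <= b.length; j++)
                s[i][j] = Math.max(
                    s[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? MATCH : MISMATCH),
                    Math.max(s[i - 1][j] + GAP, s[i][j - 1] + GAP));
        return s[a.length][b.length];
    }

    // Quantize inter-key latencies (ms) into coarse bins so that small,
    // natural variations in typing rhythm still align as matches.
    static int[] quantize(int[] latenciesMs) {
        int[] bins = new int[latenciesMs.length];
        for (int i = 0; i < latenciesMs.length; i++)
            bins[i] = latenciesMs[i] / 50;  // 50 ms bin width (assumed)
        return bins;
    }

    public static void main(String[] args) {
        int[] enrolled = quantize(new int[]{120, 180, 90, 240, 150});
        int[] attempt  = quantize(new int[]{130, 170, 95, 260, 140});
        // Higher scores mean the rhythms align; a verification system would
        // compare the score against an acceptance threshold.
        System.out.println("alignment score: " + align(enrolled, attempt));
    }
}
```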
2007 - CS303
PROTOTYPE TWO: CONNECTED COMPONENTS, HISTOGRAM INTERSECTION, AND CENTROID TRACKING TO IDENTIFY STATIC OBJECTS: A
THREAT DETECTION PROGRAM
Christopher Robert Bethel, Elizabeth Alice Ennis
Lake Brantley High School, Altamonte Springs, Florida, USA
With all of the tragic events that have occurred recently in our country, terrorism is a large and controversial subject. In most large public areas, such as airports
and train stations, there are many security precautions being taken, especially concerning unattended luggage. This project aims to relieve some of security’s
dependency on the human eye. This project will construct a computer program capable of identifying static objects that could be considered possible threats.
Matlab will be used to develop this program. <br><br> This program is able to identify, label, and track objects. An initial background model is created using the
means and standard deviations from the rows and columns of the frame. A statistical background subtraction is used in order to find the difference between the
model and the subsequent frame. Histogram intersection is then used to associate objects. Centroids are used to track the objects. The centroids are the
centers of each object and this program will be able to determine if that centroid has moved from frame to frame. This program is capable of tracking numerous
objects at one time. If an object is found to have been static for a preset interval of time, then an alert will appear on the computer screen containing a boundary
box around the potential threat. A security guard can then elect to discard this alert, or take the necessary precautions.
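For illustration, here is a minimal sketch of the statistical background subtraction step alone (the original program is in Matlab; the grayscale frame representation and the k-sigma threshold are assumptions). Connected-component labeling, histogram intersection, and centroid tracking would then operate on the resulting mask:

```java
// Build a per-pixel mean/standard-deviation background model, then flag
// pixels that deviate from it by more than k standard deviations.
public class BackgroundSubtract {
    static double[][] mean, std;

    // Estimate per-pixel mean and standard deviation from N background frames.
    static void buildModel(double[][][] frames) {
        int h = frames[0].length, w = frames[0][0].length, n = frames.length;
        mean = new double[h][w];
        std = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double s = 0, s2 = 0;
                for (double[][] f : frames) { s += f[y][x]; s2 += f[y][x] * f[y][x]; }
                mean[y][x] = s / n;
                std[y][x] = Math.sqrt(Math.max(0, s2 / n - mean[y][x] * mean[y][x]));
            }
    }

    // A pixel is foreground if it deviates from the model by more than k sigma.
    static boolean[][] foreground(double[][] frame, double k) {
        int h = frame.length, w = frame[0].length;
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y][x] = Math.abs(frame[y][x] - mean[y][x]) > k * std[y][x] + 1e-9;
        return mask;
    }

    public static void main(String[] args) {
        double[][][] bg = {{{10, 10}, {10, 10}}, {{11, 9}, {10, 12}}};
        buildModel(bg);
        boolean[][] fg = foreground(new double[][]{{10, 30}, {10, 10}}, 3.0);
        System.out.println(fg[0][0] + " " + fg[0][1]);  // false true
    }
}
```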
2007 - CS307
DON'T TUMP OVER: A HEURISTIC CARGO HOLD MODEL
Charles Samuel Boling, William R. Laub
Manzano High School and Eldorado High School, Albuquerque, NM, USA
We set out to computationally model a two-dimensional cargo hold and find adequate methods for loading any given set of rectangular packages. Due to the
problem’s computational complexity (it constitutes an instance of the knapsack problem and is documented as NP-hard), it is necessary to approximate loading
methods. Our program devises “good” loading methods (defined as those that are acceptably fast, leave little empty space in the hold, and balance the center
of mass in the center of the hold) comprised of percent chances to use one of several computationally simple loading methods.<br><br>The program first
populates a list of packages with lengths, widths, and masses.<br><br>From these values it derives characteristics such as density and
“squariness,” or the closeness of a package’s dimensions to those of a square package. It then generates a self-organizing map, or SOM, a type of neural
network used as a search algorithm. A self-organizing map contains a number of output nodes and a few input nodes, each with certain attributes. In our case,
node attributes are percent chances to use one of several computationally simple loading methods. A package may be loaded using one of several different
“greedy approximations,” which prioritize packages in accordance with their characteristics. A greedy approximation by mass, for example, first loads the
heaviest package, then the next heaviest, and so on. Each node, then, stores five percent chances from 0 to 100: greedy approximations by mass, area,
density, and squariness, and a chance to pick an unloaded package at random and load it.<br><br>
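For illustration, a minimal sketch of one such greedy approximation, loading heaviest-first; the one-dimensional hold and the Pkg fields here are simplifying assumptions (the project's hold is two-dimensional and the percent chances are tuned by the SOM):

```java
import java.util.*;

// Greedy approximation by mass: always load the heaviest remaining package
// that still fits. A sketch under a simplified 1-D hold, not the project code.
public class GreedyByMass {
    record Pkg(double length, double mass) {}

    // Returns the packages actually loaded, heaviest first, until full.
    static List<Pkg> loadByMass(List<Pkg> packages, double holdLength) {
        List<Pkg> order = new ArrayList<>(packages);
        order.sort(Comparator.comparingDouble((Pkg p) -> p.mass()).reversed());
        List<Pkg> loaded = new ArrayList<>();
        double used = 0;
        for (Pkg p : order)
            if (used + p.length() <= holdLength) { loaded.add(p); used += p.length(); }
        return loaded;
    }

    public static void main(String[] args) {
        List<Pkg> loaded = loadByMass(List.of(
            new Pkg(2, 10), new Pkg(3, 25), new Pkg(4, 5)), 6.0);
        System.out.println(loaded);  // the heaviest package (mass 25) goes in first
    }
}
```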
2007 - CS009
MAKING WAVES: A 3D SOFTWARE SIMULATION OF WAVE CREATION AND PROPAGATION
James Daniel Brandenburg
Cocoa High, Cocoa, Florida, United States of America
The purpose of this project was to develop software to accurately model water wave creation and wave propagation in 3D, explicitly modeling interference,
reflection, diffraction, and attenuation. This project allowed opportunities to learn new mathematics, programming methods and lab techniques.<br><br> This
project consisted of two distinct procedures; software development and laboratory work. Software Development resulted in the ability to specify wave pool
dimensions and characteristics, simulation of wave propagation, and display of the water surface in 3D. To manage software complexity, three separate
programs were created; a wave pool editor that allows specification of environmental data; a calculation compiler that simulates the water surface and saves
iterations into a binary file format; and a program that replays the iterations as a full motion simulation of the water surface in three dimensions. Laboratory
Work consisted of building a wave pool one meter square by half a meter deep, developing measurement techniques, performing 102 drop tests, and analyzing 3,460 digital photographs and 380 full-motion videos. Seventeen spherical objects were tested to measure how the radius, density, mass, and drop height affect amplitude,
wavelength, and speed of resulting waves. The resulting data sets were averaged and three five-variable functions were written modeling wavelength,
amplitude, and propagation speed. These functions relating the dropped object’s properties and the resulting waves were used to calibrate the calculation
compiler. <br><br> The researcher concluded that it was possible to create a software model to simulate wave creation and wave propagation explicitly
modeling interference, reflection, attenuation, and diffraction.<br><br>
Awards won at the 2007 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
Second Award of $150 - Patent and Trademark Office Society
Award of Merit of $250 - Society of Exploration Geophysicists
First Award of $3,000 - United States Air Force
Third Award of $1,000 - United States Coast Guard
2007 - CS301
THE EFFECTIVE OPERATION OF A TACTICAL, WIRELESS ROBOTICS UNIT THROUGH THE INCORPORATION OF A WI-FI CAPABLE PARABOLIC DISH: A YEAR TWO STUDY
Brian Jonathan Cain, Ross Anthony Mittiga III
The Villages High School, The Villages, Florida, United States of America
The purpose of this experiment was to test the effectiveness of using a parabolic receptor dish, as an alternative to a generic Wi-Fi adapter, in operating
unmanned vehicles and performing tactical commands for both civilian and military purposes.<br><br>Experimentation began with the assembly of a mobile
mechanical platform, along with its necessary equipment: robotic arm, serial servo controller, wireless camera, onboard PDA and all pertinent software. The
tactical platform itself did not require assembly; however, as each component was constructed it was tested to ensure its reliability and performance
capabilities. All servo safe ranges and stop points were tested and found. The parabolic dish was retested with the same procedure as the previous year’s
experiment. The connection in a point-to-point network with the dish and the PDA was also tested over a mile.<br><br>The ability to control the servos was
primarily based on knowing what pulse widths to send to the servo controller. It was found that the robot itself had no “set” position at each pulse width; instead,
they controlled how fast and in what direction the wheels spun. Each servo in each joint of the arm however, had a certain position it would move to depending
on the amount of power applied. The camera was able to operate effectively and reliably over the Internet. The Internet connection between the PDA and the
dish was reliable and strong over a span of one mile. We firmly believe the hardware and software concepts can be applied in any number of applications.<br>
<br>
2007 - CS313
NEWTON'S METHOD OPTIMIZATION
Jose Juan Chavez, Juan Alberto Garcia
Americas High School, El Paso, Texas, USA
This project in its present form consists of a computer program that can solve virtually any type of polynomial by applying Newton's Method of Approximations. The idea of the project is to create efficient, fast, and versatile computer software that avoids the time-consuming process of dealing with polynomials by hand. First we conducted research about Newton's Method, to gain a full understanding of the method. Then we wrote the computer program; it was imperative to create code in Java that was efficient, original, and versatile. The program was successfully created, and allows the user to choose the degree of the equation, enter the values of the coefficients, and select the number of iterations needed to reach the required level of accuracy. Once the program was running, in order to test its reliability and efficacy, it was submitted to 100 different polynomial problems of varying complexity whose answers were already known. The results obtained were then compared with the expected values to establish how accurate the program was. In testing, the software achieved a high level of accuracy; this supports our hypothesis that the computer program is accurate, reliable, and efficient, so we can conclude that the program that applies Newton's Method was successfully created.
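For reference, the iteration the program applies is x_{n+1} = x_n - p(x_n)/p'(x_n). A minimal Java sketch of that iteration follows; the sample polynomial, starting guess, and fixed iteration count are illustrative, not the authors' code:

```java
// Newton's Method for a polynomial given by its coefficient array,
// ordered from the constant term up: c[0] + c[1]x + ... + c[d]x^d.
public class NewtonPolynomial {
    // Evaluate p(x) with Horner's rule.
    static double eval(double[] c, double x) {
        double y = 0;
        for (int i = c.length - 1; i >= 0; i--) y = y * x + c[i];
        return y;
    }

    // Evaluate the derivative p'(x) using the same coefficient layout.
    static double evalDeriv(double[] c, double x) {
        double y = 0;
        for (int i = c.length - 1; i >= 1; i--) y = y * x + i * c[i];
        return y;
    }

    // Run a fixed number of iterations, as the abstract's program lets the
    // user choose; more iterations give a more accurate root.
    static double newton(double[] c, double x0, int iterations) {
        double x = x0;
        for (int i = 0; i < iterations; i++) x -= eval(c, x) / evalDeriv(c, x);
        return x;
    }

    public static void main(String[] args) {
        double[] p = {-2, 0, 1};                 // x^2 - 2
        System.out.println(newton(p, 1.0, 8));   // ~1.41421356 (sqrt(2))
    }
}
```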
2007 - CS024
KNOWLEDGE DIAGNOSING SYSTEM BASED ON PROBABILISTIC DEPENDENCY
Taejin Chin
Daejeon High School, Daejeon, South Korea
Many students around the world are struggling to improve their understanding of what they study. To evaluate the degree of understanding, identify weak sections, and
determine the priorities of study, the ratio of incorrect answers from the examinations is usually used. However, with that method, the results from few
examinations are not enough to accurately diagnose students' understanding and the identified sections as weak points do not imply the correct order of study
because they don't consider the dependency relationships among the sections. In this work, I suggest the knowledge diagnosing system using probability and
the dependency relationship among knowledge. Based on the dependency relationship modeled using Bayesian Belief Network, the system provides the
probabilistic degree of understanding of each section and the order of sections with priorities for efficient study. The system also has the feature to help
teachers to prepare lectures by presenting the understanding of students in a specific class, and the feature to help a university or a company to select students
by listing them appropriate to its purpose. The system has been implemented as a tool and its usefulness is validated by a case study applied to 63 high school
students in Mathematics. This system can be run with any test in any subject, so it can easily help students to study efficiently.<br><br>
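For a sense of the underlying arithmetic, here is one illustrative single-section Bayesian update of the kind such a system performs; the slip rate s and guess rate g are assumed parameters, and the actual system propagates evidence through a full Bayesian Belief Network over section dependencies rather than updating one section in isolation:

```latex
% Posterior probability that section M is understood after one correct
% answer, with assumed slip rate s and guess rate g (illustrative only):
P(M \mid \mathrm{correct}) =
  \frac{(1-s)\,P(M)}{(1-s)\,P(M) + g\,\bigl(1 - P(M)\bigr)}
```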
2007 - CS031
GENETIC ALGORITHM ANALYSIS IN A C++ CODING ENVIRONMENT
Jason David Chodyniecki
DuPont Manual High School, Louisville, Kentucky, United States of America
The object of this experiment was to find a way to most efficiently solve the Traveling Salesman Problem using a genetic algorithm. Both are very important technological tools that can be used in anything from car building to interplanetary navigation. The genetic algorithm consisted of four operators.
These are the mutation, crossover, selection, and scaling operators. The crossover and mutation operators were held constant, but the selection and scaling
operators were tested with each other. The testing area was set up so that there were one hundred cities arranged in a circle.<br><br>The different selection
methods that were used were roulette wheel, Stochastic Universal Sampling, Tournament, and an alternate tournament. The different scaling types used were
Boltzmann, Rank, and Sigma. A control group was also used without any scaling.<br><br>It was hypothesized that Roulette wheel selection with rank scaling
would be the most efficient, while SUS selection with Boltzmann scaling would be the least efficient.<br><br>Ten tests were run with each combination of
techniques, and the results were recorded. The results showed that SUS selection with any scaling technique was consistently the worst of them all, but
Alternate tournament selection with Sigma scaling was the absolute least efficient. The Tournament selection with any scaling was the best way to go. It never
ran above 100,000 or prematurely converged. It ran at its best with Rank scaling, but it did well with all the others. With the completion of this experiment,
more efficient Genetic Algorithm coding practices have been found.<br><br>
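A minimal sketch of the winning operator, tournament selection, is shown below; the Tour record and the tournament size k are illustrative assumptions:

```java
import java.util.*;

// Tournament selection: sample k tours uniformly at random and keep the
// fittest (shortest). A sketch of the operator, not the experiment's code.
public class TournamentSelection {
    record Tour(int[] order, double length) {}

    static final Random RNG = new Random();

    // Select one parent for crossover by running a size-k tournament.
    static Tour select(List<Tour> population, int k) {
        Tour best = null;
        for (int i = 0; i < k; i++) {
            Tour candidate = population.get(RNG.nextInt(population.size()));
            if (best == null || candidate.length() < best.length()) best = candidate;
        }
        return best;
    }

    public static void main(String[] args) {
        List<Tour> pop = List.of(new Tour(new int[]{0, 1, 2}, 42.0),
                                 new Tour(new int[]{2, 1, 0}, 37.5),
                                 new Tour(new int[]{1, 0, 2}, 55.1));
        System.out.println(select(pop, 2).length());
    }
}
```

Because tournament selection depends only on relative fitness within each sample of k, it needs no fitness scaling at all, which is consistent with the finding that it performed well under every scaling type tested.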
2007 - CS005
PIZZA “PANTENNA”: BUILDING AND TESTING A CHEAP WIFI ANTENNA
Nicholas Mycroft Christensen
Wetumpka High School, Wetumpka, Alabama USA
My hypothesis was that I could build a working WiFi (2.4 GHz) antenna from inexpensive materials, such as a pizza pan, copper wire, coaxial cable, and basic N-connectors. Using instructions on the Internet as a general guide and computer modeling, I built a biquad “pantenna.” I connected it to a router and picked up its SSID beacon (“Mho-RF-gain”) with the “NetStumbler 4.04” computer program. <br><br>I got permission to use a professional network analyzer to measure and tune my “pantenna’s” Standing Wave Ratio to 1.1785, which was better than the 1.4521 of the control, a standard dipole antenna. <br><br>For a radiation field test, I set the “pantenna” on a tripod, powering it from a car battery. With a laptop computer 250 feet away, I took strength measurements at three polarization angles for both my “pantenna” and the control dipole. Again my “pantenna” tested with a better peak gain of 11.6875 dBi vs. the rated gain of 5 dBi for the professionally made dipole. <br><br>My final informal tests were to simply check for reception readings. The antenna picked up 205 different access points from a city balcony, and from a car moving around an urban neighborhood it picked up 1,122 in two and a half hours. <br><br>Unexpectedly, my “pantenna” not only worked, but tested at higher strength than the standard dipole, showing that impoverished areas and third-world countries could easily build homemade antennas for wireless applications, especially connecting to the Internet. <br><br>
2007 - CS040
COMPARING TWO RANDOM NUMBER GENERATORS USING STATISTICAL TESTS: MULTIPLY WITH CARRY GENERATOR VS. COMBINED LINEAR CONGRUENTIAL GENERATOR
Michael Andrew Cogswell
Jefferson High School, Shepherdstown, WV, United States of America
The purpose of this project is to compare random number generators (a Multiply With Carry generator (MWC), and a Combined Linear Congruential Generator
(CLCG)) in order to find out which one would perform better when submitted to statistical tests, and therefore would be a better generator. Each number was
generated and tested by a program I wrote. <br><br> This program applied the Frequency (Equidistribution) test, the Serial test, the Gap test, the Poker test, and the Coupon Collector’s test, each of which is a statistical test that judges the ability of a pseudo-random number generator to generate random numbers.<br><br> The results of these tests did not come out as was hypothesized; I
thought that the CLCG would be the best generator. This was not the case. The MWC came in first. The C# generator (one of the control generators, it was
from the programming package used to create the program that evaluated the tests) came out second. The CLCG came out third, and the LCG (a control from
which the MWC and the CLCG are based) came out last. So, in a one-sentence conclusion, the MWC was found to outperform the CLCG in most areas, and is
thus recommended by this study.<br><br> The CLCG came in third instead of first or second because its performance varied from first to third throughout the
tests, where the MWC was a consistently high performer.<br><br>
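As an example of the kind of test applied, here is a minimal sketch of the Frequency (Equidistribution) test as a chi-square comparison of bin counts; the bin count, sample size, and use of Java's built-in generator are illustrative assumptions, not the project's setup:

```java
// Frequency (Equidistribution) test: bin generator outputs in [0,1) and
// compare the bin counts against the uniform expectation with chi-square.
public class FrequencyTest {
    // Values far above the chi-square critical value for (bins - 1) degrees
    // of freedom suggest the generator is not equidistributed.
    static double chiSquare(double[] samples, int bins) {
        int[] counts = new int[bins];
        for (double s : samples) counts[(int) (s * bins)]++;
        double expected = (double) samples.length / bins;
        double chi2 = 0;
        for (int c : counts) chi2 += (c - expected) * (c - expected) / expected;
        return chi2;
    }

    public static void main(String[] args) {
        java.util.Random rng = new java.util.Random(42);
        double[] samples = new double[100_000];
        for (int i = 0; i < samples.length; i++) samples[i] = rng.nextDouble();
        // For 10 bins (9 degrees of freedom) the 5% critical value is ~16.92.
        System.out.println("chi-square: " + chiSquare(samples, 10));
    }
}
```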
2007 - CS011
STAROS DOORS 2.0
Mahmoud Emad Darawsheh
Modern System Schools, Amman, Jordan
This project presents a way to make Linux easier and more interesting to use, and explores the doors that open once Linux becomes easier, especially the economic benefits and portable ways of doing business.<br><br>I found that Linux would be easier if I added an easy GUI to it.<br><br>I used VB6 to create the GUI with some graphics software, then ran it on Linux using Wine (Windows Emulator) after some small configuration and installation. Running the GUI under Wine suggested new applications, such as a portable GUI and a shared GUI.<br><br>I then tried to run it on 3 PCs and found that:<br><br>1. The CPU speed and RAM size affect the speed of work.<br><br>2. The free disk space affects some operations (copying files and loading some file types).<br><br>3. The version of the host OS (Linux) and the configuration of Wine affect the style, the hardware support, and the location of Wine's "c" drive.<br><br>4. The user type matters: it controls the permissions for editing data in some places.<br><br>This project produced:<br><br>1. A GUI that makes Linux easier and better for personal use.<br><br>2. A way to make a portable GUI and do portable business.<br><br>3. New benefits in many different departments (Education, Business, Economy).<br><br>4. A way to have fun using Linux, with high protection and security at the same time and no complications.<br><br>
2007 - CS018
PING ME! OPTIMIZING CODE FOR CLUSTER COMPUTING
Erika Alden DeBenedictis
Saint Pius X High School, Albuquerque, NM, U.S.
The objective of this project is to construct, use, and optimize a cluster computer to solve large problems and investigate how the structure of the problem
determines the efficiency of the cluster. <br><br>This area of computer architecture is important because it holds the promise of increasing the speed of
calculation to enable applications of increased number, scope and complexity at a cost-effective price. Further, silicon computer chips will reach the physical
limits of the material and the only way to improve computer speed will come from parallel processing. Cluster computing best represents the cost benefits of
parallel processing.<br><br>The objectives of this project were realized. A cluster has been constructed out of surplus desktop computers with varying speeds.
A control networking program, based on Windows networking commands, allows the computers to communicate and distributes calculations accounting for
varying computer speed. In addition, two applications have been parallelized and executed on the cluster, revealing the strengths inherent to cluster systems.
The first application calculates the Mandelbrot fractal, involving a program structure that requires large amounts of data transfer for little calculation. The
second, Solar Sim, calculates gravity-assist routes throughout the solar system and requires little communication and large amounts of calculation. While each
application required different methods of program parallelization, both were optimized for the cluster with significant, linear runtime improvement demonstrated.
<br><br>Using cluster architecture produces meaningful results quickly and effectively, much more so than a single computer.<br><br>
Awards won at the 2007 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Award of three $1,000 U.S. Savings Bonds, a certificate of achievement and a gold medallion. - United States Army
Second Award of $1,500 - United States Air Force
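A minimal sketch of the data-parallel pattern the Mandelbrot application exhibits follows, using threads on a single machine in place of the author's networked cluster of desktop computers; the image size, viewing window, and iteration cap are assumptions:

```java
import java.util.concurrent.*;

// Row-parallel Mandelbrot: each worker claims rows independently, so the
// work divides cleanly apart from collecting results. This mirrors the
// "much data transfer, little calculation" structure the abstract notes.
public class ParallelMandelbrot {
    static final int W = 800, H = 600, MAX_ITER = 1000;

    // Standard escape-time iteration for the point c = cr + ci*i.
    static int escapeTime(double cr, double ci) {
        double zr = 0, zi = 0;
        for (int n = 0; n < MAX_ITER; n++) {
            double zr2 = zr * zr - zi * zi + cr;
            zi = 2 * zr * zi + ci;
            zr = zr2;
            if (zr * zr + zi * zi > 4) return n;
        }
        return MAX_ITER;
    }

    public static void main(String[] args) throws Exception {
        int[][] image = new int[H][W];
        ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
        for (int y = 0; y < H; y++) {
            final int row = y;
            pool.submit(() -> {   // one row per task, like one work unit per node
                double ci = -1.2 + 2.4 * row / H;
                for (int x = 0; x < W; x++)
                    image[row][x] = escapeTime(-2.5 + 3.5 * x / W, ci);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("center pixel iterations: " + image[H / 2][W / 2]);
    }
}
```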
2007 - CS017
GENERATOR - THE NEXT GENERATION: THE HYBRIDIZATION OF GENETIC AND ANALYTICAL ALGORITHMS FOR GENE SEQUENCING
Alexander Matthew Diedrich
Century High School, Rochester, MN, USA
The purpose of this experiment was to determine the effects of the combination of genetic algorithms and standard techniques on DNA sequencing. The
hypothesis was that the combination of the two techniques would do better than either technique alone. A program was written that performed a standard
analytical technique in which each fragment was set to every position in order to determine best match. After this was repeated for all of the fragments the
results served as the basis for the genetic algorithm program, which simulates evolution. In the genetic algorithm, sequencers are created based on the results
of the analytical portion of the program. These sequencers then compete against each other and only the top percentage are allowed to continue to pass their
genes on to the next generation. This process continues for 30,000 generations and at the end a final result is known. The results of this experiment showed
that the combination of the two techniques sequenced 76.34 nucleotides out of 150 in the DNA strand. This result was statistically significantly higher than
either technique alone, thus supporting the hypothesis.
2007 - CS003
IMPROPER FRACTIONAL BASE ENCRYPTION
John Wilson Dorminy
Sola Fide Home School, McDonough, GA , USA
I invented an original method of encryption using improper fractional bases (IFBs). Statistical testing of my new IFB encryption indicates that there are no apparent biases in the encryption output. <br><br>Quite importantly, I found natural redundancies in the representations of integers in improper fractional bases. I created an entirely new method of representing IFB integers. Depending on the IFB used, one reduced-redundancy integer representation might be any one of many different decimal integers. Using this method of representation, I created a secure method of encryption utilizing improper fractional bases. <br><br>I implemented this improper fractional base encryption as a block cipher in a Python program. This program, IFB-1, uses both confusion and diffusion to hide the message from attackers. <br><br>A common statistical randomness tool tested the output of 100 sets of IFB-1 encryption. The statistical results indicate that the output of IFB-1 encryption is indeed random, demonstrating that IFB-1 encryption is not statistically flawed. Further cryptanalysis yielded three potential attacks, but all three are specifically addressed and prevented in the final version of my algorithm. No attacks have been proven practical against IFB-1 encryption. <br><br>The new reduced-redundancy representations of IFBs allowed me to create the first secure method of encryption using improper fractional bases, opening a new area for encryption exploration.
Awards won at the 2007 ISEF
First Award of $1,000 - Association for Computing Machinery
$15,000 Award for Scientific Depth and Rigor - Alcatel-Lucent
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
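For background, here is a minimal sketch of the building block the cipher above rests on: representing a non-negative integer in an improper fractional base p/q with digits 0..p-1. This digit-extraction rule is a standard construction, not the author's algorithm; the reduced-redundancy representations and the IFB-1 cipher itself (written in Python) are not shown:

```java
import java.util.*;

// Digits of n in base p/q: n = (p/q) * n' + d with d = n mod p,
// so n' = q * (n - d) / p is again an integer.
public class FractionalBase {
    static List<Integer> toBase(int n, int p, int q) {
        List<Integer> digits = new ArrayList<>();
        while (n > 0) {
            int d = n % p;
            digits.add(0, d);        // most significant digit first
            n = q * (n - d) / p;
        }
        return digits;
    }

    public static void main(String[] args) {
        // 10 in base 3/2 is [2, 1, 0, 1]: 2*(3/2)^3 + 1*(3/2)^2 + 0 + 1 = 10.
        System.out.println(toBase(10, 3, 2));
    }
}
```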
2007 - CS014
APPLICATIONS OF MULTI-VALUED QUANTUM ALGORITHMS
Yale Wang Fan
The Catlin Gabel School, Portland, OR 97225, United States
Central to the practical realization of quantum computers is the development of versatile quantum algorithms for solving classically intractable problems in
computer science. This project generalized both the binary Deutsch-Jozsa and Grover algorithms for quantum computation to multi-valued logic over arbitrary
radices, and hence larger classes of problems, using the quantum Fourier transform. The mathematical formulation of these algorithms was thereby extended
to Hilbert spaces of any dimension. New applications of these multi-valued algorithms, as well as their computational advantages over the binary cases, were
discovered. The extended Deutsch-Jozsa algorithm is not only able to distinguish between constant and balanced Boolean functions in a single query, but can
also find closed expressions for classes of affine logical functions in quantum oracles, accurate to a constant term. This is achieved with the phenomenon of
quantum parallelism and has potential applications in image processing. Furthermore, the multi-valued extension of the Grover algorithm for unstructured
quantum database search requires a substantially smaller memory register to implement and fewer wasted information states, which bears applications to
various NP problems.
Awards won at the 2007 ISEF
First Award of $1,000 - IEEE Computer Society
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2007 - CS039
DIFFERENTIATING RANDOM SEQUENCES WITH A GENETIC ALGORITHM
Neil Taylor Forrester
Falmouth High School, Falmouth, MA, USA
High quality Random Number Generators (RNGs) are essential for secure cryptography. A new method of testing the quality of RNGs using Genetic
Programming has been developed. To test a pair of RNGs, a Genetic Algorithm attempted to create programs capable of accurately sorting sequences from
each RNG according to their source. If it is possible to sort sequences in this manner, then one or both of the RNGs must not be random. A Genetic Algorithm
was chosen because it does not require knowledge of the details of the RNGs.<br><br> The Genetic Algorithm begins by randomly creating an initial
population of programs. It eliminates programs that sort poorly and creates new programs by combining parts from programs that sort well. To determine how
well each program sorts, it evaluates 128 sequences from each RNG, producing a value for each sequence. Then the algorithm decides whether these values
can be used to divide the sequences into two groups. This entire process is analogous to natural selection, and is repeated for at least one thousand
generations of programs. Programs gradually improve and better solutions evolve.<br><br> The Genetic Algorithm has occasionally created sorting programs
capable of differentiating between the output of a Pseudorandom Number Generator and that of physical sources generally accepted as truly random. Only one
successful program is needed to prove that an RNG is not random. Successful programs sort with 90%-97% accuracy. Another possible application is
classification of unknown random number streams, which is potentially important in a cryptanalytic attack.
Awards won at the 2007 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2007 - CS007
WIFI - A BLUEPRINT FOR SUCCESS
Kyle Daniel Fraysur
The Villages High School, The Villages, FL
The purpose of my project is to design a WiFi network for one facility using only blueprints. My hypothesis is that it is possible to lay out the most efficient locations of access points in a facility based on predictions supported by data collected from other buildings with similar construction. <br><br>I got a set of blueprints of the target facility. I measured wall thickness. Then I went to two other buildings with apparently similar construction and took test signal penetration measurements. I graphed the results to show signal strength to distance ratios for wall thickness. On the blueprint, I calculated how far you can go before reaching the maximum allowable signal strength loss. I placed a marker at this point on the blueprint and continued this process for the whole building. At the target building, I took readings from the spots marked on the blueprints. I marked the actual locations on the blueprints with a different color marker. When the survey was complete, I compared predictions to the actual number and location of access points.<br><br>Although the facilities I gathered data from had similar construction, the predictions I made were not supported. You cannot lay out the most efficient locations of WiFi access points in a facility based solely on predictions supported by data collected from other buildings with similar construction.
2007 - CS313
NEWTON'S METHOD OPTIMIZATION
Juan Alberto Garcia, Jose Juan Chavez
Americas High School, El Paso, Texas, United States
This project in its present form consists of a computer program that can solve virtually any type of polynomial by applying Newton's Method of Approximations. The idea of the project is to create efficient, fast, and versatile computer software that avoids the time-consuming process of dealing with polynomials by hand. First we conducted research about Newton's Method, to gain a full understanding of the method. Then we wrote the computer program; it was imperative to create code in Java that was efficient, original, and versatile. The program was successfully created, and allows the user to choose the degree of the equation, enter the values of the coefficients, and select the number of iterations needed to reach the required level of accuracy. Once the program was running, in order to test its reliability and efficacy, it was submitted to 100 different polynomial problems of varying complexity whose answers were already known. The results obtained were then compared with the expected values to establish how accurate the program was. In testing, the software achieved a high level of accuracy; this supports our hypothesis that the computer program is accurate, reliable, and efficient, so we can conclude that the program that applies Newton's Method was successfully created.
2007 - CS006
COMPUTER SIMULATION OF EROSION PATTERNS
Adam Thomas Goss
Brush High School, Brush CO, United States of America
Computer Simulated Erosion Analysis (CSEA) is a program designed for the benefit of meteorologists, scientists, and the general public. It is a program that
abides by the laws of chaos theory, and works on a principle of random movement to simulate the natural process of erosion. Because erosion is a highly complicated process, CSEA focuses only on the two biggest factors causing erosion: wind and rain. CSEA’s main goal is to use topographic maps of existing land masses and predict the most probable erosion patterns that these bodies will undergo in a given time frame. Contractors, environmentalists, scientists, meteorologists, and even the general public will benefit from these erosion models. For example, a contractor would benefit
from knowing how a landmass will erode over a period of twenty years so he can determine the best location of a factory, for both economical and
environmental purposes. Another use of CSEA’s erosion models is in mountainous regions. By determining how mountains will erode over time, scientists will
be able to use this data to help prepare for forest disease spread, forest fire prevention, and an array of other valuable processes. The end goal of CSEA is to
model erosion, determining the most probable erosion patterns, so that people all over the world will be able to use this new technology to save money, time,
and lives.
2007 - CS308
FACTORS, FORCES & FORECASTING STOCK MARKET MODELING SIMULATION
Christian Stephen Hammond, Quinton Smith, Tyrus Sanders
APS Career Enrichment Center, Albuquerque, NM, USA
Genetic algorithms were developed in the 1970’s by John Holland at the University of Michigan. Genetic algorithms are used to solve optimization problems in
chemistry, engineering and other fields that involve math. In today’s stock market it is difficult to decide when to buy and sell stocks. In our project we are using
genetic algorithms to find the best times to buy and sell stock. Genetic algorithms will help us to develop stock trading strategies to maximize profit and reduce
downside risk. Our strategies use two stock technical indicators – price moving averages and trading volume.<br><br><br>The process described in the
previous paragraph is difficult. The main challenge centers on the fact that all stock technical indicators have one or more parameters to determine how they
are calculated. It is very time consuming to manually narrow down these parameters to their best values for a particular stock. <br><br> <br><br>So, how does
one find the best parameter settings to produce the best returns on stock trades? We believe that it is possible to find the parameters that indicate when to buy
and sell stocks to receive the best returns by using genetic algorithms. We used Visual Basic Express 2005 to develop a computer program that uses a genetic
algorithm and historical stock price data to develop strategies for timing stock buys and sells.<br><br> <br><br>
Awards won at the 2007 ISEF
Award of $1,000 - Association for the Advancement of Artificial Intelligence
Fourth Award of $500 - Team Projects - Presented by Science News
2007 - CS016
FREE MOUSE
Rui Huang
Twenty-fourth Middle School, Chengdu, Sichuan, China
The purpose of this project is to invent a kind of mouse that can be used freely, without a fixed support such as a table, so that it may be applied in more jobs and reduce users' fatigue. Its main characteristics are low cost and acceptability to more people, without requiring them to change their former habits when using it. <br><br> There are two ways of making this kind of mouse. One is to alter the former cursor that scans a fixed plane into one that scans a ball surface which moves together with it. A specially designed "universal axis" supports the aberrant-barycenter ball, which is fixed under the cursor. The cursor, via the human hand, obtains relative displacement away from the ball surface, thus realizing the operation of the mouse. The other way involves aberrant-barycenter wheels, which are installed vertically, and a mechanical electronic mouse raster axis. The two devices are connected through a rate-changeable driving system. With the help of the human hand, the relatively stationary wheel drives the raster to operate.<br><br> Comparing the two, the free mouse designed by the first method operates easily and accurately. It can be used with or without a fixed support. For example, we can use it either with support, e.g. at work, for drawing and games, or without, such as on a sofa or bed or for a multimedia display. What's more, users need no special training to manage it freely without trouble. Therefore, it may greatly reduce users' fatigue caused by long periods of work. <br><br>
2007 - CS001
MIXED REALITY AND ITS APPLICATION IN THE REALM OF MEDICAL REHABILITATION
Shaz K. Jafri
Lake Highland Prep School, Orlando, Florida, USA
Mixed Reality is a new form of technology hybridizing tangible elements of physical reality with virtually created programs, parlaying an immersive experience to the individual. In the researcher's experiment this is applied to the medical and health industry, namely rehabilitation, along with education and graphics design, to observe "Neuroevolution" and bolster the immersive environments. Currently, medical rehab has been lagging behind when it comes to personalization, and traditional teacher preparation is of little use if teachers cannot effectively engage students and manage classrooms. Thus the present experiments utilizing this MR technology were designed both to remove these plaguing issues and to observe the processes of Neuroevolution.<br><br> Novel programs written in "C++", "JAVA" and "Groovy" created evolving interactive virtual landscapes. A previously created Mixed Reality medical rehabilitation landscape was enhanced through two "Neuroevolution" experiments: one created a virtual teacher training landscape for the purpose of behavioral evolution, the other created a point-light landscape to observe environmental evolution. In each of these environments live interactors in realistic role-playing scenarios were
required to perform a series of tasks. In response to their performances, cues, indicators, and virtual aid (derivatives of Neuroevolution) were incorporated via on-site programming, so that each subsequent trial was enhanced accordingly.<br><br> Utilizing the newly observed principles of Neuroevolution, marked improvement in all venues was observed. Time (the vehicle of measurement) decreased in the medical landscape. Behavior was kept at a minimum "disruption" level in the teaching environment. Extrapolation in the point-light landscape showed high percentages of accuracy.<br><br>
Awards won at the 2007 ISEF
Tuition Scholarship of $120,000 - Drexel University
2007 - CS035
PROGRAMMING A PANDEMIC SIMULATION: ANALYZING THE EFFECTIVENESS OF MEDICAL RESPONSE PROCEDURES
Nolan M. K. Kamitaki
Waiakea High School, Hilo, Hawaii, United States
This science project involved programming a simulation to observe three hypothetical pandemic situations infecting an age-specified population to identify the
best way to respond to an outbreak. Observing and graphing the three diseases with various parameters determined the most effective antiviral distribution
method, the best quarantining method, and crucial times within the virus’ spread for each virus type. A Java-based software simulator, NetLogo, was used to
design the programs.<br><br> The results of repeated testing revealed that in the first and third (high mortality) diseases, giving all antiviral doses to youths
was the most effective distribution method to save lives. When testing quarantining methods, the simulation results indicate that in the first and third diseases,
quarantining adults would be the best way to decrease mortality rates. Contrastingly, the second (low mortality) disease showed that distributing the antivirals to
the elderly and quarantining both youths and adults was the most effective. The third observation showed that the optimal time for quarantining and antiviral distribution for the first and second disease outbreaks was between 14 and 21 days, and within 7 days for the third disease.<br><br> Waiting for an actual
pandemic to study is unrealistic, and past pandemic statistics are sparse, so programming a computer simulation might be the most accurate method for
collecting data to facilitate logical medical procedure choices. While a computer program may be flawed for simulating a pandemic, this project demonstrates
that it can be useful for investigating medical procedures to prevent a pandemic from occurring.
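For contrast with the agent-based approach, a compartmental SIR sketch is shown below; this is not the author's NetLogo model (which tracks age-specified individuals), and the infection and recovery rates are assumed values chosen only to produce a visible outbreak curve:

```java
// Compartmental SIR model: S (susceptible), I (infected), R (recovered)
// fractions of a population, stepped forward with simple Euler integration.
public class SirModel {
    public static void main(String[] args) {
        double s = 0.99, i = 0.01, r = 0.0;   // initial population fractions
        double beta = 0.30, gamma = 0.10;     // infection / recovery rates per day
        for (int day = 0; day <= 60; day += 10) {
            System.out.printf("day %2d: S=%.3f I=%.3f R=%.3f%n", day, s, i, r);
            for (int k = 0; k < 10; k++) {    // ten one-day steps between printouts
                double newInf = beta * s * i, newRec = gamma * i;
                s -= newInf; i += newInf - newRec; r += newRec;
            }
        }
    }
}
```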
2007 - CS305
OPTICAL SHOOTING-RANGE
Viktor Krivak, Kanovsky Tomas
Secondary Technical School Uherske Hradiste, Uherske Hradiste, Czech Republic
Our project aims to develop a shooting-range simulation with zero consumption of material. We tested many different approaches, each with advantages and disadvantages, but one had many more positives than negatives. We decided on it after visiting a conference on motion capture, a way to load 3D movement into a computer by looking for a red point on an actor's clothing. We use this technology for our own function.<br><br>Our project includes a video camera as a sensor, a special laser gun, a data/video projector and a computer. When you shoot, the camera captures a picture of the target and sends it to the computer, which separates the print of light and calculates its coordinates. These are displayed on a special user interface.<br><br>We invented a new special algorithm. We separated the whole process into three steps:<br><br>a) Try to find one red point<br><br>b) Count the red points around the first point<br><br>c) Average all the red points and use that as the shot coordinates<br><br>When one of these steps fails, the system waits for another frame from the camera and tries again. When all the steps succeed, the system sends the values to the graphical interface and waits four seconds; the system does not allow a double shot. We also developed a special electronic gun.<br><br>We plan to use this system to diversify education in school or to train policemen. For that we must create more different graphical user interfaces, which makes more modes possible, for example hunting animals or shooting at a moving range.
2007 - CS304
WIRELESS INTEGRATED SECURITY EXTENSION (WISE): A NOVEL APPROACH FOR SECURING SUPERVISORY CONTROL AND DATA
ACQUISITION (SCADA) SYSTEMS AGAINST CYBER ATTACKS
Arooshi Raga Kumar, Smriti Rajita Kumar
duPont Manual High School, Louisville, KY, USA
Supervisory Control and Data Acquisition (SCADA) systems have become more vulnerable to many cyber attacks, such as unauthorized access, modification
of data, and denial of service (DOS) attacks. This is due to the use of the internet as a transport mechanism for sending critical SCADA data and control
messages between the remote terminal units and their respective management and monitoring systems. The purpose of this project was to secure SCADA
communication over the internet. In order to protect SCADA systems from vulnerabilities and attacks mentioned above, two significant components were
developed: i) an improved and secure communication protocol along with an intrusion detection algorithm, ii) a wireless Integrated Security Extension (WISE)
for rerouting SCADA traffic in case of a DOS attack. The integration of these two steps significantly improves the security of SCADA communication. The proposed WISE framework for SCADA systems implements encryption, authentication and traffic rerouting in case of a DOS attack. The effectiveness of the overall WISE
system was demonstrated by performing a simulation of SCADA system that uses the Internet for its data and control message exchange. The results
supported the hypothesis that the throughput of SCADA packets decreases and the delay in receiving SCADA packets increases as a DOS attack is mounted on its
communication path in the internet. The results demonstrated the effectiveness of the WISE framework wherein the performance of the system improved
significantly when SCADA packets were rerouted via WISE designated routers.
Awards won at the 2007 ISEF
Team First Award of $500 for each team member - IEEE Computer Society
Fourth Award of $500 - Team Projects - Presented by Science News
2007 - CS037
A JAVA-BASED GENETIC ALGORITHM STOCK SCREENING PROGRAM FOR MAXIMIZING RETURN ON INVESTMENT: SHOW ME THE MONEY
Bryan Nathan Landman
Dr. Michael M. Krop Senior High School, Miami, FL, USA
The goal of the project was to determine how stocks can be chosen, based only on the underlying financials of the companies, to produce returns that
consistently beat the Standard & Poor's 500 Stock Market Index.<br><br> Stock screening consists of filtering a pool of stocks with one or more parameters,
based on the financials of the companies. Most stock screening programs rank stocks using “low” and “high” qualifiers to evaluate financial parameters and
come up with a profitable stock portfolio. I decided to consider a “middle” qualifier and to also assign weights to each parameter. With twenty-eight parameters,
there exist 15^28 = 8.52 x 10^32 possible combinations of stock screens; to test these would require currently unattainable computer time and power.<br><br>
I developed a Java program based on a genetic algorithm in which each stock screen was considered a chromosome and each financial parameter a gene.
Through evolutionary processes, the fittest chromosomes, measured by the highest ROI (Return on Investment), typically survived and passed on their genes
until a close to optimal result was reached.<br><br> Using 2004-2005 data to build the stock screening sequences, and then constructing a stock portfolio with
2003-2004 data, the program averaged an ROI of 4.5% above the Standard & Poor's 500 Index.<br><br> Genetic algorithms could similarly be used to
predict weather patterns like hurricanes, based on past weather data. Militarily, genetic algorithms could be used to improve troop deployment and movement
in order to minimize costs and casualties by analyzing which of the countless variables were the most important in past operations.
Awards won at the 2007 ISEF
Award of $1,000 - Association for the Advancement of Artificial Intelligence
Scholarship Award of $1,000 - National Collegiate Inventors and Innovators Alliance/The Lemelson Foundation
2007 - CS019
LISTEN TO THE NEXT GENERATION OF BEETHOVEN'S COMPOSITIONS: DIGITAL MUSIC GENERATOR USING FRACTAL AND GENETIC CROSSOVER CONCEPTS
Chi-Chin Lin
National Taichung First Senior High School, Taichung City, Taiwan (R.O.C.)
In this project, we design and implement a novel music generator platform named Composer Imitator (CI) to create pleasing music automatically. In the CI
platform, we propose the Enhanced Iterated Function System and apply Genetic Crossover Method to create new music. The Imitated Parent Judgment (IPJ)
function is defined to evaluate the pleasure of the music. Initially, the CI randomly selects an initial music note that will be used as the basis to obtain several
new music notes by applying the Enhanced Iterated Function System. The Genetic Crossover Method determines the length of these new notes. We set rules
in the IPJ function and, applying the rules, examine the music similarity (that is, the relative note pitch) of the generated music notes, and then filter out the bad music notes. In this project, we ran test experiments to study the good-music ratio (that is, the probability that the generated music is scored as good music by the listeners). Our study shows that the CI platform performs well on the good-music ratio.
2007 - CS030
ACOUSTIC MUSIC SIMILARITY ANALYSIS
David C. Liu
Lynbrook High School, San Jose, California, United States
The popularization of digital music presents the problem of quickly finding music for individual users’ tastes. Collaborative user feedback (e.g. iTunes’ “Listeners
Also Bought…”) is often skewed by popular songs and fails to account for new music.<br><br> This project investigated audio-based music analysis to find
songs similar to given query songs. A new method of improving similarity results using spectral graph theory and the eigenspace transformation has been
presented.<br><br> The statistical characteristics of the frequency distribution were used to capture the perceived texture of the music as signatures. Distances
between song signatures were calculated using the Earth Mover’s Distance (EMD) algorithm.<br><br> These distances were represented as a connected
graph. The eigenspace transformation was used to rearrange the points based on a random walk of this graph. This was a novel approach that separated
songs into distinct groups.<br><br> Playlists of 10 similar songs were generated using each song in the collection as a query. The percentage of songs in the
same genre as the query was defined as the genre matching accuracy.<br><br> A 3-D music navigation system was also developed as a visualization of the
song collection in eigenspace, where similar songs are shown near each other.<br><br> It was found that applying the eigenspace transformation on the EMD
distances used in other research improved genre matching accuracy by 13.5% over the EMD alone, which is statistically significant.<br><br> Music similarity
analysis has the potential to change the way consumers listen to music. This project contributes an algorithm that improves similarity results considerably.
Awards won at the 2007 ISEF
Award of $1,000 - Association for the Advancement of Artificial Intelligence
Honorable Mention Award of $200 - Association for Computing Machinery
Honorable Mention Award - Acoustical Society of America
2007 - CS012
COMPUTER AIDED INSTRUCTION IN THE MODERN CLASSROOM
Raeez Lorgat
Rondebosch Boys' High School, Cape Town, Western Cape, South Africa
Computers have benefited mankind in many areas, and remain untapped in many others. The main aim of this project was to provide a modern, fresh and exciting environment for education to take place in, utilizing computers as a medium. Though not restricted to it, one of the concerns and motivations of this project was to make this new form of education accessible to the underprivileged, both in South Africa and internationally.<br><br>A program suite was
designed to enhance education in the classroom. VirtuaLAB, a Virtual Reality (VR) laboratory, provides a 3-Dimensional, fully equipped and extensible science
laboratory where the students have the freedom to conduct experiments, learn and experience chemistry 'first hand' (similar to modern video games), all
provided through an affordable desktop solution. Testing, both of the students' and teachers' usage of the program, as well as research into the affordability and
viability of such a proposed environment was also conducted. <br><br>There are many communities where overpopulation, poverty and lack of resources have
made education inaccessible. VirtuaLAB was investigated to determine how it could aid such a community practically. Other benefits, such as those to students'
progress, results and attitude towards their education, were examined.<br><br>In the broader perspective, VirtuaLAB is one implementation of a Computer
Aided Instruction (CAI) environment – using VR to immerse the user in the VirtuaLAB world. The community response alone proved that such new technologies
are not only viable but also in demand, and that endless opportunity awaits in this modern technological future.<br><br>
Awards won at the 2007 ISEF
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Computer Science - Presented by Intel Foundation
2007 - CS026
TWO-DIMENSIONAL PATTERN IDENTIFICATION BY MOMENT INVARIANTS
Kevin Kong Luo
York High School, Yorktown VA, USA
Moment invariants are a classical tool for feature-based pattern recognition. They were first introduced in pattern recognition by M. K. Hu in 1962, who derived
his seven well-known invariants to translation, rotation, and scaling in two-dimensional patterns. The purpose of this experiment was to develop a computer
program for implementation and validation of M. K. Hu’s theory of moment invariants. The procedures were: 1. Study moment invariants in pattern recognition,
2. Design algorithm for program incorporating moment invariants for pattern recognition, 3. Coding, 4. Debugging, 5. Test program with patterns subject to
translation, rotation, scaling, mirror images and different patterns, and 6. Record and analyze results. The main binary pixel test pattern was designed with
elements of a rectangle, triangle, hole, and curve. The major findings were: 1. An incorrect derivation of the normalized moments in Hu’s paper, and the correct
equation was derived and used in the program; 2. In scaled patterns, Hu’s first invariant function was not invariant but the other six were invariant; 3. Hu’s
functions were invariant for translated and rotated patterns, and mirror images; 4. patterns of differing ratios 6/196, 4/196, 2/196, and 1/196, and the additional
different patterns were successfully distinguished using all seven of Hu’s functions; and 5. Hu’s use of only two invariant functions from seven in his paper’s
sample simulation program is questionable. The inaccuracy of the first invariant function in scaled patterns will be investigated in further studies. The
experiment has applications in the military, security measures, and image processing.
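For reference, Hu's invariants are built from normalized central moments. A minimal sketch (with a hypothetical binary test pattern) computes the first two invariants using the textbook normalization eta_pq = mu_pq / mu_00^(1+(p+q)/2), the formula whose published derivation the project challenges.
```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary (or grayscale) image."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def hu_first_two(img):
    """First two Hu invariants from normalized central moments
    eta_pq = mu_pq / mu_00^(1 + (p+q)/2)."""
    mu00 = central_moment(img, 0, 0)
    eta = lambda p, q: central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2)
    i1 = eta(2, 0) + eta(0, 2)
    i2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return i1, i2

# A small binary test pattern: the values stay (nearly) unchanged when the
# rectangle is translated inside the frame.
img = np.zeros((32, 32)); img[5:15, 8:20] = 1
print(hu_first_two(img))
```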
2007 - CS028
FIREWALL
Anibal Francisco Martinez Cortina
Sacred Family Parroquial Institute, Realico, La Pampa, Argentina
Currently, insecurity levels have grown significantly in both the real and digital worlds. Each workstation with LAN/Internet access becomes vulnerable to threats like spam, scamming, phishing and, most commonly, spyware (which is powerful enough to steal data from infected computers and spread itself).<br><br>Addressing the need to preserve privacy and system stability, this software was designed to be easy to use and yet powerful enough to safeguard critical information from being stolen or corrupted by any kind of attack.<br><br>It works as an LPM&PHSD (Local Port Mapper and Potentially Hazardous Software Detector).<br><br>The project achieves this by checking system tables, reading which process called a program and how it was loaded, and gathering information about running software, including whether or not it is using Internet bandwidth or access from another PC. It not only reads where a program is connected to, but also tracks back the real data stream to determine whether the program is being used as a host for another hazardous application.<br><br>If an unexpected situation is discovered (for example, the anti-virus failing to detect a spyware program), the active scan module will notify the user and ask for confirmation before taking action, while "pausing" the troublesome program.<br><br>
2007 - CS036
THE EFFECTIVENESS OF CHARACTER RECOGNITION USING SELF-MODIFYING MATRICES
Shane Anthony Masih-Das
Suncoast Community High School, Riviera Beach, Florida, United States
The purpose of the experiment was to determine the effectiveness of character recognition by using matrices. When characters were drawn, a bounding box
was found, and the drawing was converted into a 20x20 matrix. Each letter of the alphabet was added in this way. Characters were recognized by computing
the distance between the input vector and each template vector; the template with the lowest distance, the nearest neighbor, won. The letters were drawn five
times each to determine the recognition rate for each of them. The margin between the distance of the recognized character and that of the second-nearest
was recorded for the letter ‘C’. It averaged 0.36. To train the matrices, the difference between corresponding elements of the template and input matrices were
multiplied by a learning rate of 0.2 and added to the element of the template. After training each letter, characters were drawn and recognized again. The
recognition rate increased, and the margin of recognition increased to 1.86, indicating that the recognition for the character ‘C’ became greater relative to the
other characters. Data was taken until the third training, whereupon both rates decreased: letters were sometimes recognized as letters with similar features.
Overall, however, the system worked very effectively, and can be implemented to recognize various characters, providing a way to write in an environment in
which a keyboard is not available. Research can be conducted recognizing whole words, independent of orientation and skew. Flourishes, ligatures, and other
features would have to be accounted for.
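A minimal sketch of the nearest-neighbor recognition and training rule the abstract describes, assuming flattened 20x20 matrices and random stand-in templates (the actual drawing data is not available):
```python
import numpy as np

def recognize(x, templates):
    """Nearest-neighbor classification: return the label whose template
    vector has the smallest Euclidean distance to input x, plus the margin
    to the second-nearest template."""
    dists = {label: np.linalg.norm(x - t) for label, t in templates.items()}
    ranked = sorted(dists, key=dists.get)
    best, second = ranked[0], ranked[1]
    return best, dists[second] - dists[best]

def train(x, templates, label, rate=0.2):
    """Move the winning template toward the input, element by element,
    scaled by a learning rate (0.2, as in the abstract)."""
    templates[label] += rate * (x - templates[label])

# Hypothetical 20x20 drawings flattened to 400-element vectors.
rng = np.random.default_rng(0)
templates = {c: rng.random(400) for c in "ABC"}
sample = templates["C"] + 0.1 * rng.standard_normal(400)
print(recognize(sample, templates))
train(sample, templates, "C")
print(recognize(sample, templates))  # the margin grows after training
```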
2007 - CS046
QUANTUM CLUSTER ALGORITHMS: UNSORTED DATABASE SEARCH AND MODULAR EXPONENTIATION
Gabriel Joel Mendoza
Americas High School, El Paso, Texas, United States
The physical realization of a quantum computer is on the verge of moving from a physical problem to an engineering problem, but involves formidable technical
challenges. Quantum cluster computing may be able to provide an answer to these dilemmas.<br><br> Whereas current techniques to realize a quantum
computer rely on mass superpositions of qubits, quantum clustering involves the partitioning of a problem into a series of smaller quantum clusters. In this
study, quantum cluster algorithms for Grover’s unsorted database search algorithm and Shor’s factorization algorithm were formulated. The computational
complexities of these algorithms were compared to the efficiency on classical computers and best-case quantum machines. <br><br> Significant quadratic speedups for factorization (from O(n^3) to O((n log n^2)^(1/2))) and for database search (from O(n^2) to O((n^2)^(1/2))) were found. Although the speedups are not as
significant as those provided by a best-case quantum computer, quantum error can be more readily corrected on smaller scale quantum clusters. This new
search and factoring methodology has potential for widespread applications in areas such as complexity theory, scheduling, cryptography, bioinformatics,
communications, signal processing, and operations research.<br><br>
Awards won at the 2007 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the United States Navy and Marine Corps.
UTC Stock with an approximate value of $2000. - United Technologies Corporation
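The cluster formulation itself is the project's contribution and is not reproduced here; the baseline quadratic speedup of Grover's search that it builds on can be illustrated with a quick query-count comparison:
```python
import math

def queries(n_items):
    """Compare expected classical lookups against the ~(pi/4)*sqrt(N)
    Grover iterations for an unsorted search over N items."""
    classical = n_items / 2                 # expected linear-scan probes
    grover = math.floor(math.pi / 4 * math.sqrt(n_items))
    return classical, grover

for n in (1_000, 1_000_000):
    c, g = queries(n)
    print(f"N={n}: classical ~{c:.0f} queries, Grover ~{g} iterations")
```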
2007 - CS008
BLOCKING SIGNALS
Alyssa Kathryn Meyers
Martin County High School, Stuart, Florida, United States
The purpose of my experiment was to see which material (metal, plywood, drywall, brick) affected the signal between a wireless laptop and a router the most. I hypothesized that metal would.<br><br> I connected the laptop and a router wirelessly, and the router and another computer with an Ethernet cable. I could then send a signal from the laptop to the router to the computer and back, and record the data (signal strength, packet loss, ping time). I did this for each material at 0, 10, 20, and 30 meters, using two programs, iStumbler and Ping Utility.<br><br> When the data was put in a table and graphed, it became clear that metal, as my hypothesis predicted, affected the signal the most. According to my research, this is because the metal formed its own electrical currents, causing the signal to be blocked.<br><br>
2007 - CS311
STRATEGIC SUDOKU SOLUTIONS AN ANALYTICAL APPROACH TO THE POPULAR PUZZLE
Charles Robert Meyers, Miles McCullough
Little Rock Central High, Little Rock, AR, USA
The purpose of the project was to find an algorithm to solve all logical Sudoku puzzles. This was done by determining the basic rules of Sudoku solution and applying these rules to a set of puzzles. The rules were analyzed for efficiency and ranked accordingly. This set of ranked rules was used to write an algorithm, which in turn was implemented as a computer program in the C++ programming language. The algorithm was shown to solve all logical Sudoku puzzles through both manual and computer application. The data supported the hypothesis that a relevant and efficient algorithm could be created.
Awards won at the 2007 ISEF
Team Second Award of $400 for each team member - IEEE Computer Society
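The team's ranked rule set is not listed in the abstract; as a sketch of the rule-application loop such a solver is built around, here is the most basic Sudoku rule (the "naked single"), in Python rather than the team's C++:
```python
def solve(grid):
    """Repeatedly apply one basic Sudoku rule -- the 'naked single'
    (a cell whose row, column, and 3x3 box leave only one candidate) --
    until the grid stops changing. grid is a 9x9 list of ints, 0 = empty."""
    def candidates(r, c):
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[i][j] for i in range(br, br + 3) for j in range(bc, bc + 3)}
        return {v for v in range(1, 10) if v not in used}

    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0:
                    cand = candidates(r, c)
                    if len(cand) == 1:
                        grid[r][c] = cand.pop()
                        changed = True
    return grid

# Demo: a solved grid with one cell blanked; the rule restores the 5.
puzzle = [list(map(int, row)) for row in [
    "034678912", "672195348", "198342567",
    "859761423", "426853791", "713924856",
    "961537284", "287419635", "345286179"]]
print(solve(puzzle)[0][:3])   # -> [5, 3, 4]
```
Naked singles alone solve only easy puzzles; a full logical solver layers further ranked rules (hidden singles, pairs, and so on) on the same loop.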
2007 - CS013
CONSTRUCTION OF VIRTUAL AXONAL AND DENDRITIC MORPHOLOGY BASED ON PROPERTIES EXTRACTED FROM CLASSES OF NEURONS
Katja Miller
"Christian"-Gymnasium Hermannsburg, Germany
Simulation of biological neuronal networks has become an accepted method in neurobiological research. Still, the problem of obtaining the anatomical data of neurons and networks needed for a virtual reconstruction remains. It is therefore my intention to show the possibility of constructing random networks with the required properties, even from reduced data, thereby replacing a complete reconstruction of single networks. Furthermore, the improvement in ethical terms is essential. <br><br>All data are taken from photos of neurons of rat barrel cortices. This approach requires the identification of
specific morphological properties of neurons, which are segment lengths, branching angles, orientation and spatial density distribution of tips and branches in
axonal and dendritic tree structures. 2D-images of neurons are available for most classes of neurons in numerous nervous systems, and for all images the
positions of branches and tips must be localized relative to the cell body (soma). Images from top- and side-view provide marginal distributions for all three
axes, to approximate a 3D-density-function as product of the discrete values for each position. This density-function is used as gradient for random fractal
growth of the tree structure, which is limited by several parameters and the measured orientation. The Monte-Carlo method helps to find the correct parameters
by comparing the discrete distribution of segment lengths and branching angles in 2D-projections of constructed tree structures with original pictures.
Consequently the constructed neurons are supposed to correspond to the biological neurons in all required properties and can be combined to complex
networks.<br><br>
Awards won at the 2007 ISEF
Paid Summer Internship - Agilent Technologies
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
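A small numpy sketch of the density construction the abstract describes: per-axis marginal distributions (random stand-ins here for values measured from top- and side-view images) are multiplied into an approximate 3D density, from which candidate branch positions can be drawn to steer random growth.
```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical marginal distributions of branch/tip positions along each
# axis, as would be measured from top- and side-view 2D images (10 bins each).
px = rng.random(10); px /= px.sum()
py = rng.random(10); py /= py.sum()
pz = rng.random(10); pz /= pz.sum()

# Approximate the 3D density as the product of the per-axis marginals...
density = px[:, None, None] * py[None, :, None] * pz[None, None, :]

# ...and draw candidate branch positions from it to steer random growth.
flat = density.ravel()
picks = rng.choice(flat.size, size=5, p=flat / flat.sum())
print(np.column_stack(np.unravel_index(picks, density.shape)))
```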
2007 - CS034
ROBOT VISION: A MUTUAL ENTROPY-BASED ALGORITHM THROUGH SCENE RECOGNITION FROM IMAGE SEQUENCES FOR TERRESTRIAL AND
PLANETARY EXPLORATION
Lucia Mocz
Mililani High School, Mililani, Hawaii, USA
This work introduces a novel approach for robot vision in terrestrial and planetary environments, primarily for the purpose of autonomous navigation. All-terrain
scenes may change greatly due to variations between the positions of the robot, the environment, and lighting conditions. Although anchored in complex and
powerful algorithms, current technologies are not generally capable of dealing with such variations of the surroundings. In order to alleviate this problem, this
project has developed a robust scene recognition algorithm using an entropy-centric description of the environment. The algorithm, termed Entropy-Based
Visual Perception or EBVP, is based on the computation of mutual entropy, i.e. the amount of information that a scene being observed conveys about the
environment observed previously. EBVP uses maximum mutual entropy matching to identify the position and view direction of the robot in the terrain. The
effectiveness of the new method is demonstrated by a number of simulation and natural terrain results. The recognition rate for East-West and North-South
trajectories and view direction is 96%, 95%, and 97% (median values), respectively. The joint accuracy of both positions is 95%, and 94% for all three
parameters together. Extreme changes in lighting conditions have only marginal influence on the results (± 2%) demonstrating that the EBVP algorithm is
particularly well suited for varying environments. Overall the method appears highly competitive or superior to others with regards to performance and
simplicity.
Awards won at the 2007 ISEF
Award of $1,000 - Association for the Advancement of Artificial Intelligence
Third Award of $300 - Association for Computing Machinery
Third Award of $350 - IEEE Computer Society
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
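The abstract's "mutual entropy" between a current and a previous view corresponds to mutual information, which can be estimated from a joint intensity histogram; a numpy sketch with random stand-in images:
```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally-sized grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                           # avoid log(0)
    return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(2)
scene = rng.random((64, 64))
print(mutual_information(scene, scene))                  # high: same view
print(mutual_information(scene, rng.random((64, 64))))   # near zero
```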
2007 - CS043
THREE-DIMENSIONAL SCANNING OF SMALL OBJECTS
Jason Andrew Monschke
Krum High School, Krum, Texas, United States of America
The purpose of this project was to write software and build an apparatus to three-dimensionally scan a small object and produce a digital model of that object.
In order to achieve this, a line-generating laser is projected on an object and a camera takes a picture of the profile that is outlined by the laser. I then wrote a
program named imageprocessor.java to analyze the images line by line and find the horizontal pixel location of the laser on the object. From this, my program
calculates the three-dimensional coordinates of the horizontal pixel location. Next imageprocessor.java writes all the three-dimensional coordinates from all
images to an intermediate text file. The program CreatePlg.java links together all the points from the intermediate text file into polygons. CreatePlg.java then
writes the information for the coordinates and the polygons into a .x 3D file format. The best scan thus far was of a paper cylinder. The size of the digital cylinder was 17% less than the actual one; however, visually, the shape was very accurate. The only defects in the digital model of the cylinder, besides scaling errors, were regular rings around the circumference of the cylinder. These were due to errors in the line-finding algorithm in imageprocessor.java. One way to make the process more accurate is to automate it by rotating the object with a stepper motor. I attempted to do this; however, the stepper motor I was using did not have enough torque to turn the platform.
Awards won at the 2007 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
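The abstract does not give the exact geometry used to turn a laser pixel location into 3D coordinates; a common laser-triangulation formulation (assumed here, not taken from the project) intersects the camera ray through the pixel with the calibrated laser plane. All calibration numbers below are hypothetical.
```python
import numpy as np

def triangulate(pixel_xy, focal_px, cam_pos, plane_n, plane_d):
    """Intersect the camera ray through an image pixel with the laser
    plane (n . p = d) to recover a 3D surface point. Pinhole camera at
    cam_pos looking down +Z."""
    u, v = pixel_xy
    ray = np.array([u / focal_px, v / focal_px, 1.0])    # ray direction
    t = (plane_d - plane_n @ cam_pos) / (plane_n @ ray)  # ray parameter
    return cam_pos + t * ray

# Hypothetical calibration: camera at the origin, laser plane tilted 45 deg.
n = np.array([1.0, 0.0, -1.0]) / np.sqrt(2)   # plane normal
point = triangulate((40, -12), focal_px=800.0, cam_pos=np.zeros(3),
                    plane_n=n, plane_d=-0.1)
print(point)   # 3D coordinates of the lit surface point
```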
2007 - CS025
CALCULATED RELEVANCE OF INFORMATION THROUGH KEYWORDS
Ukseong Moon
Daegu High School, 187 Daemyung-dong Nam-gu Daegu, South Korea
People usually acquire information in four steps: (1) find or search for the article they need, (2) read the article and find its main idea, (3) reorganize the ideas from the article, and (4) turn them into knowledge. As society develops, people improve knowledge storage systems, but these four steps are still needed to gain new knowledge. People can find an article through keyword search, but they are not told what the main idea of the article is.<br><br>In this work, we build two kinds of semantic networks using the calculated relevance of information through keywords. One shows the relevance between keywords, and the other shows the relevance of information. These two semantic networks give two advantages. First, we provide an efficient summary of the article, which can remove steps (2) and (3) above. Second, we guide the user to the correct keywords when they are searching with inexact keywords.<br><br>A tool called 'Goocle' provides the keywords of articles and the relationships among articles. We also show that users can cut down their effort in recognizing information and get the correct result efficiently by using our semantic network. <br><br>
Awards won at the 2007 ISEF
First Award of $200 - Patent and Trademark Office Society
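The abstract's relevance calculation is not specified; one simple stand-in for keyword-keyword relevance is the Jaccard overlap of the articles in which two keywords co-occur, sketched with a hypothetical mini-corpus:
```python
from itertools import combinations
from collections import Counter

# Hypothetical mini-corpus: each article reduced to its keyword set.
articles = [{"search", "index", "ranking"},
            {"search", "ranking", "semantics"},
            {"neuron", "network", "learning"}]

kw_count = Counter(k for art in articles for k in art)
pair_count = Counter(frozenset(p) for art in articles
                     for p in combinations(sorted(art), 2))

def relevance(a, b):
    """Keyword-keyword relevance as the Jaccard overlap of the articles
    containing them -- one simple stand-in for the abstract's measure."""
    both = pair_count[frozenset((a, b))]
    return both / (kw_count[a] + kw_count[b] - both)

print(relevance("search", "ranking"))   # 1.0: always co-occur
print(relevance("search", "learning"))  # 0.0: never co-occur
```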
2007 - CS032
EFFECTS OF DRIVER BEHAVIOR ON TRAFFIC FLOW
Michael James Morrison
Clear Lake High School, Houston, Texas, USA
In today’s traffic conditions, some drivers are more conservative, while others are more aggressive. The distribution of these different behaviors can possibly
change the overall flow of traffic.<br><br> For this project, a traffic simulator was created, capable of simulating different proportions of aggressive/non-aggressive drivers in real time. The average speed of traffic, number of accidents, and other values were collected and recorded. Tests were run to determine
the impact of: the adjustment of the average driving aggression, one individual outlier, and the uniform behavior of all drivers.<br><br> The test results show
that when all drivers were adjusted to be more aggressive, the average speed, number of lane changes, and number of accidents increased. When one
outlying driver was set to be very aggressive in heavy traffic, he only gained about 5 mph, while increasing his chance of an accident. When all the cars were
set to a uniform aggression, average speed was usually high and steady, as the cars moved together as one unit.<br><br> Different driving styles have a large
impact on traffic flow. Aggressive drivers travel at higher velocities, but wind up with more accidents in the end. Conservative drivers take longer to reach their
destination, but have fewer accidents. Also, aggressive drivers only gain a small advantage in heavy traffic.<br><br> By understanding these effects of driver
behaviors on the overall flow of traffic, drivers could be educated to improve their safety and efficiency in different driving scenarios. This project also illustrates the pros and cons of uniform driving.<br><br>
Awards won at the 2007 ISEF
Tuition Scholarship of $120,000 - Drexel University
2007 - CS042
DECOMPOSITION OF MEDICAL IMAGING ALGORITHMS TO OPTIMIZE FOR MULTI-CORE COMPUTER ARCHITECTURE
Divya Nag
Mira Loma High School, Sacramento California, USA
A common problem in the medical field is the time taken to process images from CAT scans, MRIs and other such devices. The purpose of this experiment is to
create an optimized medical imaging algorithm which will decrease the time taken to process images. <br><br> Through the use of program parallelism,
computer science techniques were utilized to reduce image analysis time. This novel approach, which encompasses the decomposition of medical imaging
algorithms to optimize for multi-core computer architecture, significantly reduces processing time and can become the essential bridge between life and death.
Modern medical software programs operate on images pixel by pixel, proving to be extremely sequential and inefficient. A parallel program was created to
implement the decomposition of images into rows and columns through the use of threading. Now, instead of going through the image one pixel at a time, the
creation of multiple threads allows the program to process multiple parts of the image simultaneously. Determining the correlation between the number of
software threads to the number of processors allowed for the production of optimal results through the new program.<br><br> The use of these decomposition
techniques will allow medical diagnosis machines to be cost efficient by replacing expensive specialized hardware with off-the-shelf computers. The greatest advantage observed through this process is the 64% speed-up in the time taken to analyze medical images; this remarkable result has the potential of being the determining factor between life and death in everyday medical emergencies. <br><br>
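A minimal sketch of the row-block decomposition the abstract describes, here using Python worker processes rather than the project's threads; the per-pixel operation is a stand-in for a real medical filter.
```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_block(block):
    """Stand-in per-pixel operation; a real medical imaging filter
    (smoothing, thresholding, etc.) would go here."""
    return np.sqrt(block)

def parallel_filter(image, workers=4):
    """Split the image into horizontal row blocks and process them
    simultaneously -- the row/column decomposition the abstract describes."""
    blocks = np.array_split(image, workers, axis=0)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(process_block, blocks)))

if __name__ == "__main__":
    scan = np.random.default_rng(3).random((2048, 2048))
    out = parallel_filter(scan)
    assert np.allclose(out, np.sqrt(scan))   # same result, computed in parallel
```
Matching the number of blocks to the number of processor cores is the correlation the project reports tuning.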
2007 - CS302
DEVELOPMENT OF A GENERIC SYSTEM FOR ACCURATE HANDWRITING RECOGNITION IN ANY LANGUAGE
Alan Garrett Pierce, Nathaniel John Broussard
School of Science and Technology, Portland, OR, USA
Handwriting recognition is the process of converting handwritten characters to machine-readable text. In nearly any handwriting recognition system, some
compromise must be made between flexibility and accuracy. A common approach is to give the handwriting recognition system previous knowledge of the
details of the language. However, the nature of this solution restricts the system to a fixed language or character set, so it is rare to find commercial handwriting
recognition applications that can be used with unpopular languages. Our ultimate goal is to develop a handwriting recognition computer program that is flexible
enough to be used with any language, yet powerful enough to achieve acceptable accuracy.<br><br> Our system is built on a previous software application that we have developed over the last two years. Rather than relying solely on samples of handwriting for its knowledge, the new system
allows any user to add a language and specify information such as the writing direction, a complete dictionary, and specific information about how to recognize
each individual character. This feature allows any user to create a handwriting recognition system in his or her own language with relative ease.<br><br>
Although our recognition system does not have as much potential for accuracy as a program specifically designed to work with a single language, it is more
accurate and can be trained faster than many alternate cross-language designs. Our project presents a successful design for a flexible, but powerful
handwriting recognition system.
2007 - CS041
SPOOFING THE SENSORS
Kendra L. Potasiewicz
Poland Central School, Poland NY, United States of America
The main focus of this study was to show that, by using open-source information and common materials, the average person possesses the capability to manufacture a fake fingerprint. This print would be created without the owner of the print actually knowing, and would be capable of spoofing an optical fingerprint sensor. It was originally hypothesized that the wood-glue fake fingerprint would not be able to spoof the capacitive sensor due to its more advanced identifying techniques; however, after extensive experimentation and trial and error, this hypothesis was disproved. The wood-glue fake fingerprint spoofed the optical sensor 96% of the time, and the capacitive sensor 97.33% of the time. This study showed the vulnerabilities of these systems, and how simply these vulnerabilities could be exposed.
Awards won at the 2007 ISEF
Scholarship Award of $20,000 - Department of Homeland Security, University Programs Office
2007 - CS029
FDIS: A FAST FREQUENCY DISTRIBUTION BASED INTERPOLATION SEARCH ALGORITHM
Ram Raghunathan
Sishya, Chennai, Tamil Nadu, India
Search is a fundamental operation in computer science. Binary and Interpolation are two well-known search algorithms to locate an item in a sorted array.
Interpolation performs better (worse) than Binary when data is (not) uniformly distributed. <br><br>This project develops a novel search algorithm, FDIS, that
utilizes frequency distribution of data in the array. The use of frequency distributions narrows down the search range and allows FDIS to exploit Interpolation.
<br><br> FDIS, Binary, and Interpolation search algorithms were implemented in Java. The number of locations searched and the time taken were used as
performance criteria. The array size (N), frequency table size (k), and the shape of distribution of data were used as control variables. 5 levels for each control
were chosen, resulting in a total of 125 experimental scenarios. Data were generated randomly from Gamma, Gaussian, and Weibull distributions.<br><br> On average, FDIS performed significantly better than both Interpolation and Binary. The worst-case performance of FDIS was not significantly different from that of the other two. The average reduction in the number of locations searched (search time) was 82% (15%) and 94% (88%) compared to Binary and Interpolation, respectively. The gap between the performance of FDIS and the other two algorithms grew when either the array or the frequency table size increased, or the data in the array was more non-uniform. The experiment confirmed my hypothesis that the average-case performance of FDIS is O(log(log(N/k))).<br><br>The project demonstrated that FDIS has the potential to significantly improve search in information retrieval applications that require large databases.<br><br>
Awards won at the 2007 ISEF
Second Award of $500 - Association for Computing Machinery
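FDIS itself is not published in the abstract; the sketch below captures the stated idea, narrowing the index range with a cumulative frequency table over k value buckets and then running plain interpolation search inside it.
```python
import bisect

def build_freq_table(arr, k):
    """Index of the first element falling in each of k equal-width value
    buckets -- a cumulative frequency table over the sorted array."""
    lo, hi = arr[0], arr[-1]
    width = (hi - lo) / k or 1
    return [bisect.bisect_left(arr, lo + i * width) for i in range(k)] + [len(arr)]

def fdis_like_search(arr, table, k, x):
    """Narrow the range with the frequency table, then run plain
    interpolation search inside it (a sketch of the FDIS idea; the
    project's exact algorithm is not published)."""
    lo_v, hi_v = arr[0], arr[-1]
    width = (hi_v - lo_v) / k or 1
    b = max(0, min(k - 1, int((x - lo_v) / width)))   # bucket for x
    lo, hi = table[b], table[b + 1] - 1
    while lo <= hi and arr[lo] <= x <= arr[hi]:
        if arr[hi] == arr[lo]:
            pos = lo
        else:  # interpolation step: guess the position proportionally
            pos = lo + (hi - lo) * (x - arr[lo]) // (arr[hi] - arr[lo])
        if arr[pos] == x:
            return pos
        if arr[pos] < x:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1

arr = sorted([3, 7, 7, 12, 25, 31, 48, 60, 75, 90])
table = build_freq_table(arr, 4)
print(fdis_like_search(arr, table, 4, 48))   # -> index 6
```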
2007 - CS023
CALCUVISION A CALCULATOR FOR THE VISUALLY IMPAIRED
Manuel Alejandro Reyna
James Pace High School, Brownsville TX, US
Calcuvision has the ability to simplify the experience of a child with a visual disability. Children with visual disabilities find it hard to maintain the same pace of learning as other children. However, Calcuvision isn't intended solely for children with visual disabilities; it should make any user's experience more enjoyable, as it is easier to follow along with others and easy to navigate. After inserting the commands, the calculator's GUI is converted into a more complicated set of buttons, allowing the user a wider range of functions and options. After all of the calculator's functions and formulas were completed, a function was integrated into the ActionScript that allows the user to hear the button they are pushing, making it easier for a child with a visual disability to interact with the calculator. Calcuvision encountered several errors in the first couple of trial runs. The calculator's functions did not work properly and produced faulty results. After the functions were fixed, the answers returned were correct, but the audio feedback gave nonexistent answers. It took time to rectify the audio problem, but after it was fixed the calculator ran smoothly. Although the calculator is working fine, I believe it still requires a few more weeks of difficult, solid work. With the results and data gathered from experimentation, I firmly believe this calculator will simplify and shorten the time a child with a visual disability will take learning the subject.
2007 - CS052
THE PREINKA CULTURES
Jesus Rojas Minchola
Colegio Militar Gran Mariscal Ramon Castilla . Trujillo, PERU
In recent years, the Peruvian educational system has considered, among other innovations, environmental themes and productive formation, framed in the aspiration to reach sustainable development in our country. The questions are: how do we do it? When do we do it? What do we do, and from which educational institutions? Our school has a diversified Biohuerto (bio-garden) for the kindergarten, primary, and secondary levels. It has been working for 14 years, which led us to develop our project, entitled "Biohuerto: Integrated element of education, production and sustainable development in the Rafael Narvaez Cadenillas high school of the Education Faculty of the National University of Trujillo, Peru." Problem: How do Biohuerto educational activities, framed in sustainable development, develop environmental and productive skills in the students of Rafael Narvaez Cadenillas High School of the National University of Trujillo, Peru? Hypothesis: The activities carried out in the educational Biohuerto, framed in sustainable development, develop environmental and productive skills in the students of Rafael Narvaez Cadenillas High School. The sample population consisted of 222 students of Rafael Narvaez Cadenillas High School, with whom the activities of the educational Biohuerto were carried out within the framework of sustainable development, using as a stage the Biohuerto of the secondary level of our educational institution. The method used in this research was quasi-experimental, with a one-group pre-test and post-test design. The educational activities of the Biohuerto framed in sustainable development were grouped into pedagogical, productive, and environmental. The results obtained in the experimental group were submitted to the Z hypothesis test at a significance level of 5% (a=0.05), so we concluded that the educational Biohuerto activities framed in sustainable development have a significant influence on the development of the environmental and productive skills of the students who participated in this research.
2007 - CS020
ORGANIZATION OF MULTIBOOT IN WINDOWS NT
Grigoriy Nikolaevich Romanowskiy
College of Est Ukrainian National University named after V.Dal, Lugansk, Ukraine
The multiboot system I'm presenting provides a unique, completely safe, cross-platform superstructure for any operating system loader. It makes it possible to boot different operating systems with their own facilities, even when those do not include such a procedure. It thereby avoids any editing of system data, unlike other standalone loaders, which can damage critical system data or even all data on the device. Moreover, it uses only the required minimum of system data, and does so dynamically, so that almost no changes to its configuration are needed after the configuration of any of your operating systems changes.<br><br>The presented system consists of two parts. The main one is a real-mode code module, executing the boot process and called by a host loader. The second is an application that provides a graphical user interface, locates installed operating systems, and allows adding them to the host-loader menu. The program is provided as a single executable file that has no dependencies on any third-party software and, furthermore, does not require installation. <br><br>The designed system can be used for any purpose connected with using operating systems, and is thereby intended for a wide public. Primarily, the system was developed for operating systems that do not have their own multiboot tools, or whose loaders conflict with each other. The proposed system achieves these goals in a proper, that is, safe, way.
Awards won at the 2007 ISEF
Honorable Mention Award of $200 - Association for Computing Machinery
2007 - CS021
PHYSICS WORLD
Israel Salazar Chavez
CBTis 168, Aguascalientes, Ags., Mexico
An investigation made in Aguascalientes, Mexico, demonstrates that 58% of high school students find it difficult to understand topics in physics; so we wonder, is it possible to facilitate the learning process and, at the same time, reduce the failure rate in the subject "Physics II" using a computerized system?<br><br>Hypotheses:<br><br>Ha1: "By means of a computerized system it is possible to facilitate the teaching-learning process of Physics for high school students"<br><br>Ha2: "The failure rate in the subject of Physics is reduced using a tutorial system as a didactic tool"<br><br>Ha3: "The easier it is to learn the subject of Physics, the lower the failure rate" <br><br>The process was: preliminary investigation; determining hardware and software requirements; gathering information about physics topics; design and general development of the system; testing and adjustments; and preliminary implementation and result analysis.<br><br>"Physics World" is a computer system developed in Visual Basic whose purpose is to facilitate the learning process of the subject "Physics II" with the help of an assistant called "Phy," who helps the student interact with a virtual 3D interface with multimedia applications. We also interact with the system using a virtual globe (peripheral) to access four buildings where it is possible to study the following topics: Heat and Temperature, Electricity, Field and Magnetism, Updates and Evaluation.<br><br>Results show that the implementation of "Physics World" increases the ease of learning by 33% and reduces the failure rate by 30.34%. Therefore, we have evidence to accept hypotheses Ha1, Ha2 and Ha3.<br><br>
Awards won at the 2007 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2007 - CS308
FACTORS, FORCES AND FORECASTING STOCK MARKET MODELING AND SIMULATION
Tyrus LaVelle Sanders, Quinton Smith, Christian Hammond
APS Career Enrichment Center, Albuquerque, New Mexico, United States
Stock traders use technical indicators with stock trading strategies to time when to buy and sell stocks, but investors don’t know which values to use for these
indicators. We believe by using a computer program and genetic algorithms we can optimize stock trading strategies by finding the best parameters. <br><br>
The technical indicators we used in our strategy were short, long, and volume moving averages. Our stock trading strategy is based on the Moving Average Crossover: when the short moving average rises above the long moving average and there is a sufficient volume increase, we buy the stock, and vice versa. The stocks we used were split-adjusted, and dividends were not taken into account. The strategy we used includes two safety measures. The
first safety measure is a trailing stop loss; every time the closing price goes above the previous high closing price, the stop loss is adjusted upward. The trailing
stop loss helps ensure any losses are limited to the stop-loss percentage. The second safety measure is a limit order that is set to a specific percentage above
the buy price. <br><br> Without previous fundamental analysis, we chose six stocks to train our strategy over historical data. With historical stock data we
tested the effectiveness of genetic algorithms in the ever changing stock market. Using VB2005 Express we wrote a computer program that implements our
trading strategy and genetic algorithms. The parameters we found were produced through a back-test of historical data through December 2005. We then
applied these values to our randomly selected stocks. After creating a portfolio we invested $10,000 and did live “paper trading” and only invested a maximum
of $3,000 in each stock. In conclusion, the strategy produced a 20% gain while the Standard & Poor's 500 Index made a 15% gain in 2006.
Awards won at the 2007 ISEF
Award of $1,000 - Association for the Advancement of Artificial Intelligence
Fourth Award of $500 - Team Projects - Presented by Science News
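A compact sketch of the moving-average crossover signal with a volume filter, the rule whose parameters (window lengths and volume multiplier, all hypothetical here) the team's genetic algorithm would tune:
```python
import numpy as np

def sma(x, n):
    """Simple moving average with window n (valid region only)."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

def crossover_signals(close, volume, short_n=5, long_n=20, vol_n=10, vol_mult=1.5):
    """Buy when the short moving average crosses above the long one while
    volume exceeds vol_mult times its own moving average."""
    long_ = sma(close, long_n)
    short = sma(close, short_n)[-long_.size:]   # align all series to the end
    vol_avg = sma(volume, vol_n)[-long_.size:]
    vol = volume[-long_.size:]
    cross_up = (short[1:] > long_[1:]) & (short[:-1] <= long_[:-1])
    return np.where(cross_up & (vol[1:] > vol_mult * vol_avg[1:]))[0]

# Hypothetical price/volume series standing in for historical stock data.
rng = np.random.default_rng(4)
close = np.cumsum(rng.standard_normal(200)) + 100
volume = rng.integers(1_000, 5_000, size=200).astype(float)
print(crossover_signals(close, volume))   # indices of buy signals
```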
2007 - CS310
MONDO MENTE
Victor Valdez Saucedo, Stephany Ahlai Flores Mendez
CETIS 80, Aguasclientes, Ags. MEXICO
The Mexican government is concerned with educational evaluation. The 2005 average grade in the standardized exit exam for junior high school was 4.3 points out of 10; the average grade for elementary school was 5.0 points. Students have developed cognitive abilities, yet there is a lack of significant transfer of knowledge, which has resulted in poor development of general abilities. In view of this, we decided to contribute by designing a program of pedagogical intervention that helps develop the abilities evaluated in the exit exams of the different basic educational levels. Our objectives for 2007 are: to promote the development of cognitive, psychomotor and perceptual abilities in basic-education students with or without special educational needs, and to promote the development of auditory abilities with deaf and hearing-impaired children.<br><br> Our software is divided into three modules: Preschool Mondo, Elementary School Mondo, and Junior High Mondo. In the analysis and software design we used three different software engineering methodologies to verify the quality of the software requirements and the administration of the project. We have had the chance to try the package in different educational institutions in Mexico, and it has proved effective with children with all kinds of learning needs. Results and conclusions from tests show that students show a greater interest in learning through the use of the package, that hearing students develop the ability to use sign language, and that students get a faster and more holistic understanding of curricular contents.<br><br>
2007 - CS048
AI: LEARNING TO NAVIGATE A MAZE WITH REINFORCEMENT LEARNING
Jacob Evan Schwind
MOISD Math/Science/Technology Center, Big Rapids MI, USA
This paper focuses on the performance of a reinforcement learning agent in a maze environment. I hypothesize that if learning is related to reinforcement, then
the agent can solve any type of maze. I test the agent using Q-learning in nine mazes that possess the same dimensions and starting position. The way the
obstacles are laid out identifies the different types of mazes: twists, diagonals, and one maze that consists of open space. I have programmed the
reinforcement learning agent to act on a random policy, updating its Q values every step. Every ten steps a function is called to check the quality of the agent’s
Q values. This test function places the agent in random locations and follows the greatest Q value until it reaches the goal, or until 100 steps have passed. It
does this 1,000 times per trial. The function prints the average number of steps the agent took to reach the goal, and the average reward, to a results file. Not only is my hypothesis supported by the results, but the learning curves also gave clues as to the kind of maze they came from. The length of the optimum path to the goal has the largest effect on the agent's learning curves.
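A minimal sketch of the setup the abstract describes, tabular Q-learning under a random policy with an update every step, on a toy 1-D corridor rather than the author's 2-D mazes:
```python
import numpy as np

# Tabular Q-learning on a tiny 1-D corridor maze: states 0..4, goal at 4.
# Actions: 0 = left, 1 = right. Reward 1 at the goal, 0 elsewhere.
rng = np.random.default_rng(5)
Q = np.zeros((5, 2))
alpha, gamma = 0.5, 0.9

for _ in range(500):
    s = 0
    while s != 4:
        a = rng.integers(2)                      # random policy, as in the abstract
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0
        # Standard Q-learning update, applied every step:
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(np.argmax(Q, axis=1))   # states 0-3 learn action 1 (move right)
```
The quality check the abstract describes then follows the greatest Q value from random start states and averages the step counts.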
2007 - CS038
MP3 SWITCHING: TRACKING DATA-INFORMATION DOWNLOADS TO MP3 FILES
Weina Naissa Scott
Dr. Michael M Krop Senior High School, Miami FL, USA
Any programmer can track visitors to PHP files; however, tracking the data-information downloads (a visitor's browser name, IP address, and date of download) of mp3 files is a rather difficult task, because it is complex to insert code directly into mp3 files. It was impossible to do so until this experiment. <br><br>The purpose of this experiment was to determine whether one could track and organize downloads of mp3 files hosted at a particular website. <br><br>In the experiment, a website host supporting PHP was used to write the code. Mod_rewrite, an Apache module that uses "rewrite rules" placed in a .htaccess file (inside a website directory) to let programmers control website URLs, was used. <br><br>A PHP file, track.php, tracked the downloads of mp3 files hosted at a particular website. This PHP file, using the MySQL INSERT function, also inserted these downloads into a MySQL database, which organized the data in an efficient manner. <br><br>Mod_rewrite was used to tie each mp3 file to a PHP file that tracked download information. Another file, index.php, was used to display each data-information download of the mp3 files.<br><br>By tracking the IP addresses and browser names of visitors who download at a particular site, the experiment shows that terrorists and sexual predators can also be tracked when they download files.<br><br>
2007 - CS033
LUCANIDAE ACTION PATTERN SIMULATOR
Yoonji Shin
Cotter High School, Winona, Minnesota, USA
It was recently announced that insects are the best next-generation substitute food, so I compared over 170 different insects suitable for growth in Africa on nutritional values, expected supply, edibility, expected effects on the ecosystem if accidentally released, and adaptability to the environment of Africa, and I decided to study Coleoptera Lucanidae. <br><br> The hypothesis was that I could develop a simulator that will model the conditions for raising them. Software was developed that will work as a virtual research center.<br><br> LAPS (Lucanidae Action Pattern Simulator) is a real-time simulator which calculates the optimum set of five conditions for breeding Lucanidae, initially based on the literature data for Lucanidae and modified for the proposed environment entered. <br><br> Special features of
LAPS are: <br><br>1) LAMI algorithm was written to take into account the male to female size difference which determines whether they will breed. <br><br>2)
User friendly interface by 3-level mosaic structure which applied Von Neumann's cellular automata theory to depict the variables and their relative values <br>
<br>3) Evolutionary algorithm and Grid-Computing technology which easily connects all users, collects and enters the data into the simulator. Therefore, the
simulator becomes more accurate as more users become involved.<br><br> Also, in order to display data that contains multiple variables (more than three) more effectively, a new graph called the LAMI graph was developed, which currently can display eight variables in one graph. <br><br>
2007 - CS010
WHAT KINDS OF COLORS ARE USED IN THE VIRTUAL GAMES AND WHICH WOULD BE BETTER TO BE USED IN DESIGNING THESE VIRTUAL
GAMES?
Gergely Attila Sik
Padanyi Biro Marton Catholic High School, Veszprem, Hungary
Virtual reality games are popular among children and young people all over the world. Children play with the computer games longer and longer every day, and
thus the games have an influence on the aesthetic sense of the children. In this respect the question might be raised: do the designers and programmers pay
proper attention to the colors of the virtual worlds?<br><br>I categorized the most popular virtual games: Action, Adventure, Mystery, Children's, Driving & Racing, First-Person Shooter, Simulation, Role-Playing, Strategy, and Sports games. I downloaded pictures from the Internet: 7-10 pictures from every game, 752 pictures altogether. I also downloaded 179 pictures from 20 films. I made measurements in the CIELAB color space. I measured the L*, a*, b* values, and determined complexion, sky, water, grass, etc. colors. I compared these with prototypical memory colors and cartoon colors of these objects. My project quantifies these differences:<br><br>Complexion color: the colors are more yellowish than the memory colors. <br><br>Grass color: except for two categories, the colors are darker and browner than the memory colors. <br><br>Sky color: I found in most cases more blue sky colors, but in some cases they were far from natural; they were grayish.<br><br>I observed that the designers of the virtual games did not take care to use natural colors. False colors are used in virtual games! <br><br>I would like to call the attention of the designers of virtual games to using more natural shades of colors.
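Measuring in CIELAB presupposes a color-space conversion; the standard sRGB-to-L*a*b* formulas (D65 white point) look like this, with hypothetical sky pixels for illustration:
```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIELAB (D65 white point) using
    the standard sRGB -> XYZ -> L*a*b* formulas."""
    c = np.asarray(rgb, dtype=float) / 255.0
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124, 0.3576, 0.1805],       # linear RGB -> XYZ
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ c / np.array([0.95047, 1.0, 1.08883])   # normalize by D65 white
    f = np.where(xyz > (6 / 29) ** 3,
                 np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    return np.array([L, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

# Compare a hypothetical grayish game-sky pixel with a prototypical sky blue;
# the L*a*b* difference quantifies how far the game color drifts.
game_sky, memory_sky = (140, 150, 160), (110, 160, 220)
print(srgb_to_lab(game_sky) - srgb_to_lab(memory_sky))
```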
2007 - CS308
FACTORS, FORCES AND FORECASTING STOCK MARKET MODELING AND SIMULATION
Quinton Bernard Smith, Tyrus Sanders, Christian Hammond
APS Career Enrichment Center, Albuquerque, New Mexico, United States
Genetic algorithms were developed in the 1970s by John Holland at the University of Michigan. They are used to solve optimization problems in chemistry, engineering, and other fields that involve mathematics. In today's stock market it is difficult to decide when to buy and sell stocks. In our project we are using genetic algorithms to find the best times to buy and sell stock. Genetic algorithms will help us develop stock trading strategies that maximize profit and reduce downside risk. Our strategies use two stock technical indicators: price moving averages and trading volume.<br><br>The process described in the previous paragraph is difficult. The main challenge centers on the fact that all stock technical indicators have one or more parameters that determine how they are calculated. It is very time consuming to manually narrow down these parameters to their best values for a particular stock. <br><br>So, how does one find the parameter settings that produce the best returns on stock trades? We believe that it is possible to find the parameters that indicate when to buy and sell stocks for the best returns by using genetic algorithms. We used Visual Basic Express 2005 to develop a computer program that uses a genetic algorithm and historical stock price data to develop strategies for timing stock buys and sells.<br><br>
Awards won at the 2007 ISEF
Award of $1,000 - Association for the Advancement of Artificial Intelligence
Fourth Award of $500 - Team Projects - Presented by Science News
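A minimal sketch of the optimization loop the team describes, a generic genetic algorithm with tournament selection, crossover, and mutation over parameter pairs; the fitness function here is a synthetic stand-in for their historical back-test:
```python
import numpy as np

rng = np.random.default_rng(6)

def fitness(params):
    """Stand-in back-test score; the real version would replay historical
    prices with these moving-average parameters and return the profit."""
    short_n, long_n = params
    return -((short_n - 12) ** 2 + (long_n - 50) ** 2)  # peak at (12, 50)

def evolve(pop_size=30, gens=40, bounds=(2, 100)):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and random mutation over integer parameter pairs."""
    pop = rng.integers(*bounds, size=(pop_size, 2))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in pop])
        new = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if scores[i] > scores[j] else pop[j]   # tournament
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if scores[i] > scores[j] else pop[j]
            child = np.array([a[0], b[1]])                    # crossover
            if rng.random() < 0.2:                            # mutation
                child[rng.integers(2)] = rng.integers(*bounds)
            new.append(child)
        pop = np.array(new)
    return max(pop, key=fitness)

print(evolve())   # should approach the (12, 50) optimum
```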
2007 - CS015
GENDER DETERMINATION USING FACIAL IMAGES
Jimin Sung
North Monterey County High School, Castroville CA, USA
In the future, humans and computers will interact on a more intimate level. For example, computers could be teachers. Thus, a computer’s ability to determine
gender could become important for implementation of the most efficient teaching curriculum. Can a computer determine the gender of an individual by
analyzing an image of that individual’s face?<br><br>Two publicly available databases were downloaded. The first database was composed of 200 men and
200 women. The second database contained 101 men and 31 women.<br><br>Principal Component Analysis was used to extract significant facial features
from a set of training images. The features from an unknown, unprocessed facial image were compared to the features extracted from the training set. The
gender of the closest match was declared the gender of the unknown subject. <br><br>For the first database, the accuracy rose to approximately 90%. The
maximum accuracy reached for the second database was about 58%. <br><br>Discrepancies of the images in the second database most likely contributed to
the lower accuracy. The images differed in lighting, head orientation, head size, and facial scarring. The lower accuracy could also be attributed to the
unbalanced ratio of men to women in the second database. The images of the first database were all similar in lighting conditions, head orientation, head size,
lack of facial hair, and lack of scarring. <br><br>
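A compact sketch of the pipeline the abstract describes, PCA (via SVD) followed by nearest-neighbor matching in eigenspace; the "faces" here are random stand-in vectors, not the actual databases:
```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical stand-in data: 60 flattened 32x32 face images per gender.
men = rng.normal(0.45, 0.1, (60, 1024))
women = rng.normal(0.55, 0.1, (60, 1024))
X = np.vstack([men, women])
labels = np.array(["M"] * 60 + ["F"] * 60)

# PCA via SVD: project faces onto the top principal components.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
components = Vt[:20]                      # keep 20 eigenfaces
features = (X - mean) @ components.T

def classify(face):
    """Project the unknown face, then take the gender of the closest
    training face in eigenspace (nearest neighbor)."""
    f = (face - mean) @ components.T
    return labels[np.argmin(np.linalg.norm(features - f, axis=1))]

probe = rng.normal(0.55, 0.1, 1024)       # drawn from the 'female' model
print(classify(probe))
```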
2007 - CS045
NEURAL NETWORK TOPOLOGIES FOR NUMBER RECOGNITION
Thomas Benjamin Thompson
Huron High School, Ann Arbor Michigan, United States
This project explores the performance of some different topologies for neural networks used for number recognition. My goal was to find the most accurate
topology. I rated the topologies using the percentage of testing images correctly identified (accuracy). <br><br>No one has found specific characteristics that
define a good topology. I decided to investigate a total of seven different topologies in three different categories to search for any characteristics associated
with higher accuracy. The categories were fully connected topologies, sub sampling topologies and an overlapping topology.<br><br>One of the characteristics
that I thought might be important was the number of connections. Some of the topologies I tested had a huge number of connections. For example, the three-layer topology had 238,200 connections. Some had very few connections compared to this. The 2 x 2 subsampling and the 4 x 4 subsampling topologies had less than 5,000 connections. Another characteristic I examined was the arrangement of connections. This differed between my three categories of topologies.<br><br>In
future experiments, it would be advantageous to consider training speed as well as the number and arrangement of connections. There are many applications
of neural networks (such as robotics and stock market prediction), and in some of these the training speed versus the accuracy would be a big issue.<br><br>
Awards won at the 2007 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
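One of the compared characteristics, the connection count, is easy to compute for fully connected topologies; the layer sizes below are hypothetical, since the abstract does not list them:
```python
def connections(layer_sizes):
    """Number of weights in a fully connected feed-forward topology,
    one factor the project compared across its seven topologies."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical layer sizes for illustration: a 20x20 input image flattened
# to 400 units, one hidden layer of 50, and 10 digit outputs.
print(connections([400, 50, 10]))   # -> 20500 connections
```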
2007 - CS022
AIDING THE VISUALLY IMPAIRED THROUGH THE USE OF INTEGRATED COMPUTER SYSTEMS
Theodore Tzanetos
Plainview Old Bethpage John F Kennedy High School, Plainview, New York, USA
This project presents a solution to the issue of improving the quality of life of the visually impaired through the use of an integrated microcontroller. The system
designed to do this is called the Intelli-Trek. <br><br>The main goal of the research revolves around integrating one part of the Intelli-Trek, which is a sonar
detection system, to enable it to range distances. The sonar capabilities of the Intelli-Trek would allow it to detect physical obstructions to the user’s path which
would be imperative in the user’s awareness of his/her surroundings.<br><br>The sonar range finder, called the SRF04, was programmed and integrated into
the Intelli-Trek through the use of the C programming language. The onboard 8051 derivative microcontroller interpreted the data received by the sonar module
to produce a distance measurement. A 16-bit timer was used to measure the time delay of the ultrasonic sound pulses. This time measurement was interpreted and multiplied by the speed of sound to find distances in inches.<br><br>Upon testing, the Intelli-Trek was found to be accurate to within ±1 inch in some cases. Ambient air temperature may have affected the results slightly, due to the effect of air temperature on the speed of sound. The maximum range of the SRF04 module is 3 m. Therefore, the use of sonar as a means of obstruction detection has proven to be a reliable and accurate one. <br><br>
Awards won at the 2007 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
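The delay-to-distance conversion is a one-liner; the sketch below also halves the delay for the echo's round trip and adjusts the speed of sound for the air temperature the abstract mentions (the exact handling in the Intelli-Trek's C code is not given, so both details are assumptions):
```python
def sonar_distance_in(echo_s, temp_c=20.0):
    """Distance (inches) from an ultrasonic echo delay. The pulse travels
    out and back, so the one-way distance uses half the delay; the speed
    of sound is adjusted for air temperature."""
    speed_m_s = 331.3 + 0.606 * temp_c      # approx. speed of sound in air
    return (speed_m_s * echo_s / 2) * 39.37 # meters -> inches

print(round(sonar_distance_in(0.0059), 1))  # obstacle ~1 m away -> ~39.9 in
```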
2007 - CS309
CLAP(COMPONENTE DE LASER ADAPTADO AL PIE)
Jose Carlos Vega, Lucero Hernandez Rodriguez, Alejandra Guevara Arroyo
CECyTE "College Scientific and Technological Studies" Chignahuapan, Pue, Mexico
CLAP (Laser Component Adapted to the Foot) is a new tool for people with a physical disability of the arms, hands, or fingers. The aim of this new mouse is to integrate people with disabilities into the workforce, where they can produce and develop their professional skills. It was made with fiberglass; this material is tough and light. CLAP weighs 420 grams, about the weight of a casual shoe, so it is easy to manipulate and ergonomic. We also made supporting software to help users develop their skills with it; it includes games that were reviewed by a rehabilitation center for disabled people. With this mouse, people with limited mobility can join the workforce using a computer, because everybody has the same rights and responsibility for their own acts. They also have hopes, dreams and goals through which they can improve themselves. Unfortunately, the feelings of such people are linked to how society perceives them, which can make their disability seem bigger than it is, leaving them without social relationships or goals in life. We have run tests with people who attend rehabilitation centers, and we obtained satisfactory results thanks to their contributions to the new implementation, so that people with a physical disability who require functional assistance gain new work tools with which they can contribute to society.
Awards won at the 2007 ISEF
Second Award of $1,500 - Team Projects - Presented by Science News
2007 - CS306
INTRODUCING HUFOO ALGORITHM AND IMPLEMENTING 802.11 WIRELESS NETWORK SECURITY
Yizhou Wang, Tianyu Zhu, Gaonan Wang
Northeast Yucai School, Shenyang, Liaoning, China
Wireless network security is a common concern for IT professionals. We've designed a brand-new encryption algorithm, the Hufoo Algorithm, based on inspiration from ancient Chinese military documents. A Hufoo, a tiger-shaped, iron-made tally, was a simple and effective instrument used by ancient Chinese emperors to exercise remote control over troops stationed elsewhere. It was cut into two pieces with random blows, one of which was kept by the emperor and the other by the military officers. An officer dispatched by the emperor to any garrison headquarters was required to present the other half of the tally. The troops could not comply with his orders unless both halves of the tally fitted together. <br><br>In a bid to transmit data on a safe and convenient basis over a wireless network, the Hufoo Algorithm takes advantage of the features of the Hufoo and simulates its authorization process in data transmission sessions. Without complicated arithmetic computation, the Hufoo Algorithm is identified as a Dynamic Data Split, which has a low requirement for the computational capability of ARM-based embedded platforms.<br><br>This algorithm has been applied to the ad-hoc structure; furthermore, infrastructure WLANs with Hufoo-firmware access points were also investigated. Based on performance and stability tests of extension data on a campus WLAN over several months, we've concluded that this algorithm is able to reach 93%-97% of the speed of a non-encrypted environment, which is higher than that of the mainstream WLAN encryption techniques. Additional methods for further improvement of the algorithm's robustness are also discussed.
Awards won at the 2007 ISEF
Third Award of $1,000 - Team Projects - Presented by Science News
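The Hufoo Algorithm itself is not disclosed in the abstract; purely to illustrate what a two-part "tally" split of data can look like, here is a generic one-time-pad style split in which neither half reveals anything alone:
```python
import os

def split(data: bytes):
    """Cut a message into two 'tally halves' -- a one-time-pad style split
    where neither half reveals anything alone. This is only a generic
    illustration of a data-split scheme, not the Hufoo Algorithm itself."""
    pad = os.urandom(len(data))
    return pad, bytes(a ^ b for a, b in zip(data, pad))

def join(half1: bytes, half2: bytes) -> bytes:
    """Recombine the halves; only a matching pair reproduces the message."""
    return bytes(a ^ b for a, b in zip(half1, half2))

h1, h2 = split(b"advance at dawn")
assert join(h1, h2) == b"advance at dawn"
```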
2007 - CS044
CREATING A TRANSPARENT AND INTERACTIVE GENETIC PLATFORM
Philip Micheal Weinberg
Spruce Creek High School, Port Orange, Florida, United States
A lack of user-friendly applications utilizing or demonstrating Genetic Algorithms prompted me to create a GA program that could serve as an introduction to the complex world of evolutionary programming, using the sample problem of determining which is more prone to survival: sexually or asexually reproducing species. In order to accomplish this, a simulation environment was constructed in C#, via the .NET 2.0 Framework, using a variety of tools and custom components,
and around thirty thousand lines of code. <br><br>Upon execution of the program and the genesis of two species, one sexual and one asexual, it was conclusively shown after a number of iterations that sexual reproduction will yield a fitter individual than asexual reproduction. This distinction is greatly enhanced when a disease is introduced into the population, as it rapidly spreads, causing animals lacking resistance to die out. The visible correlation to the underlying data also reveals the effectiveness of the application in conveying genetic processes in a humane manner.<br>
<br>Overall, the application can be stated to be a reliable platform for basic forays into Genetic Algorithms, and presents an attractive alternative to far more
complicated Genetic Algorithm based programs. Especially when dealing with audiences that lack familiarity with computing in general or the theory of
evolution, this program provides a visual and extremely intuitive representation of reproduction and the clear advantages of sexual versus asexual reproduction.
2007 - CS004
THE HARDWARE AND SOFTWARE FRAMEWORK "ANALYSTIC-EXPERT" FOR ECOLOGICAL MONITORING AND MEDICAL DIAGNOSTICS
Victor Zaytsev
Second School Lyceum, Moscow, Russia
“Analytic-Expert” is a processing platform for measuring ion concentrations or biopotentials as the output of sensors for ecology, medicine, etc. To achieve scalability, the technology of dynamically loaded extension modules (plug-ins) is used. Input modules are used to communicate with measurement devices and to receive the data, with verification and error recovery. Output plug-ins provide external device control, including alarm switch-off. Data transformation modules perform mathematical processing of the measured data, including noise filtering, extreme-value monitoring, derivative calculation, and more. Interface and output plug-ins provide the possibility to present the data in suitable formats (tables, plots, annotated geographic and industrial maps, XML, viewable HTML, and Microsoft Excel).<br><br>Network plug-ins provide remote control of instruments and web cameras. Control plug-ins are used to integrate a set of modules; they describe measurement and control methods. This can be a simple chemical titration, or the industrial neutralization of a toxic waste with an equivalent quantity of a suitable reagent. Plug-ins may be written in the built-in scripting language or in any .NET language. The framework is compatible with a new, fast (10-12 persons per hour) potentiometric system for early diagnostics of mammal cancer. Another application is the monitoring and geographical mapping of data received from remote chemical composition sensors, used in industrial ecology and fire security control. Using danger criteria, a module detects leakage of toxic or explosive gases and generates alarm signals, which is important for industrial security and anti-terror control. Another module provides the possibility to switch different devices on or off in case of emergency. The program complex is open (it provides the possibility to attach instruments of other types).<br><br>
2007 - CS002
THE QUANTUM GRAPH COLORING ORACLE
Franklin R. Zhao
Westview High School, Portland, Oregon, United States of America
I used the concepts of quantum computing to solve graph coloring problems. Concepts such as quantum superposition, quantum parallelism, and quantum entanglement in general allow problems to be solved far faster than on classical computers; in the case of my algorithm, the speedup is quadratic. Using quantum computing concepts on a relatively easy-to-solve problem such as graph coloring is an excellent way to explain the properties and efficiencies of a quantum circuit. <br><br> This project illustrates the basic concepts of quantum computing as well as a detailed analysis of the quantum oracle for graph coloring. The structure of the oracle is derived through Karnaugh maps, which represent the function, and the subsequent use of logic synthesis to create the circuit from elementary quantum gates such as the Toffoli and Feynman gates. The algorithm used is Grover's Algorithm, which accommodates the oracle through the variable oracle block in the algorithm. <br><br> The circuit was simulated in MATLAB by deriving the matrices of each block of the algorithm, which is composed of my oracle and the generic parts of Grover's Algorithm. My simulation confirmed the compatibility of my oracle with the algorithm, as well as the algorithm's ability to solve the problem.
2008 - CS049
BE SAFE WITH BETTER RULES FOR MOD_SECURITY
Osamah Sulaiman AlHenaki
Abu Omar AlBasri High School, Riyadh, Central, SAUDI ARABIA
This project is about protecting websites hosting servers from unauthorized entries and hacking activities by writing certain defined rules using
MOD_SECURITY open-source web application Firewall.<br><br>The defined rules are easy to implement by the public (inexperienced computer users).
Therefore, web server owners could save money and time by using the proposed MOD_SECURITY rules. <br><br>The defined rules are short compared to
traditional firewall settings. This helps in optimizing web server performance.<br><br>In addition, the rules are concise and focused on unwanted requests and
suspicious hacking attempts, unlike traditional firewall settings, which are broad and overprotective and end up blocking otherwise authorized activities. For example, e-mails containing words like "Cookies" can be blocked even when innocent. Also, naming system files C99.php can get them
blocked as they resemble C99 hacking shell.<br><br>The recommended rules also protect against encryption and transfer coding. <br><br>In conclusion,
using well-defined, optimized, easy-to-implement MOD_SECURITY rules is an effective way for inexperienced owners of website hosting servers to save money and protect against unwanted hacking attempts without blocking innocent requests through overprotective rules.<br><br>
2008 - CS024
SPEECH RECOGNITION FOR THE DAUGBOT
Amin Kamal Ali
Oak Grove High School, Hattiesburg, MS
Soldiers around the world depend increasingly on robots to help them perform their missions. Even with the development of semi-autonomous robots, the Soldier must still input commands to the robot through keypads or controllers. Adapting speech recognition for robotic control may allow Soldiers to communicate with robots without using their hands, keeping them free for other tasks. However, high noise levels on the battlefield or in army vehicles sometimes make it difficult for the speech recognition system to recognize Soldier commands.<br><br>The goal of this
project was to create a speech grammar for the Dismount’s Assistant Unmanned Ground 'BOT (DAUGBOT), a semi-autonomous robot designed at the U.S.
Army Research Laboratory (ARL) that can perform visual searches for friendly and enemy forces. A Verbex automatic speech recognizer was used to write a
DAUGBOT speech grammar with over 100 words and phrases. An experiment was then conducted to determine the noise limits in which the speech
recognizer could function. The grammar was trained to the speaker’s voice, at 65 A-weighted decibels (dBA). The recognizer was then tested at three different
levels to determine how efficiently it worked. The results indicated that the Verbex provides a 96% correct recognition at 84 dBA, 20 dBA above that at which it
had been trained. It is hypothesized that training the speech recognizer at higher noise levels will most likely improve recognizer performance in battlefield and
tank noise conditions.
2008 - CS016
ULTIMATE CONTROL
Muhammad Awais
Pak Turk International Schools and Colleges, Peshawar, Peshawar, N.W.F.P., PAKISTAN
This project called 'Ultimate Control' is aimed to be an easy way of doing many things with computer that we do manually.<br><br>In this project some daily life
components have been operated without any contact with them, but with the help of a computer. This device has reduced any possibility of getting electrical
shocks, as the peripherals of the computer use are just 5-12 volts. With the help of the web cam, the project is made fully automatic.<br><br>The project helps
many people who are sensitive to electric appliances while switching them “on” and “off” (bulbs, fans, refrigerators etc). Hence, by making use of this project we
can easily control all the appliances with just a single click or plugging a web cam.<br><br>The main link used to the computer is the “parallel port” which
connects the “relay system circuit” to the PC and enables the computer parser commands to go through the parallel port into the relay circuit. <br><br>On the
other side this whole process is carried in the background of the desktop, meaning there is no need of any program to open, and this is done by the help of two
languages, namely C++ and PASCAL. In short, the project handles the whole “house electrical system”.<br><br>Mobiles or other utilities can also be used to
make this project far much better. Also this project is very flexible because computer has a capability of holding many plug and play devices like cameras,
mobile phones etc.
2008 - CS019
THINK-THEN-EAT OR EAT-THEN-THINK: COMPARING PROGRAM STRATEGIES
Stephen Daniel Ball
Northern Nevada Homeschool, Incline Village, NV
My project presents the programming of Nanorgs, virtual nanobots that live in a virtual barrel of sludge. The sludge, some of which is toxic, is converted to
energy and deposited at designated collection points. This project is based on a Symantec™ competition in which hypothetical nanobots are programmed via task-specific machine-language code.<br><br>Although the competition objective was to maximize sludge deposition, this project explored the advantages of two specific
collection strategies. One tactic has Nanorgs identify dangerous sludge and avoid it while the other saves precious ticks by having the Nanorgs eat any sludge
they encounter. My question was, “Which of the two tactics will yield more energy in a given amount of time or ‘ticks’?” I hypothesized that the Nanorgs that
consume sludge without checking for toxicity would initially process more sludge than those that check sludge to avoid damage, but that the non-checkers would fall behind as they succumb to toxins.<br><br>The scores of the trials run were consistent regardless of the initialization value, or “seed.” Also, the Nanorgs that did not check the sludge did better, on average, up to the 7,000th tick on the seeds chosen. I can conclude that the seed has no significant bearing on the
outcome and that the non-checkers stay ahead of the checkers for a small but significant number of ticks. This simulation may have future application in
designing real world nanobots for such functions as treating waste water and toxic spills or harvesting specific molecules from soil or rocks.
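A minimal sketch of the two tactics, with invented costs (one tick to check, one to eat, a fixed energy gain, and a fixed toxin damage); the actual Nanorg machine language and tick economy are not reproduced here:

    import random

    TOXIC_FRACTION, TICKS, ENERGY, DAMAGE = 0.2, 7000, 10, 25

    def run(check_first, seed):
        random.seed(seed)
        energy, t = 0, 0
        while t < TICKS:
            toxic = random.random() < TOXIC_FRACTION
            if check_first:
                t += 1                      # checking costs a tick
                if toxic:
                    continue                # skip the toxic sludge
                energy += ENERGY
            else:
                energy += ENERGY if not toxic else -DAMAGE
            t += 1                          # eating costs a tick
        return energy

    for seed in (1, 2, 3):                  # compare eat-then-think vs think-then-eat
        print(seed, run(False, seed), run(True, seed))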
2008 - CS312
SECURE STORAGE IN A PEER TO PEER ENVIRONMENT
Osbert Bastani, Sameer Deshpande, Arun Venkatraman
Texas Academy of Mathematics and Science, Denton, TX
A secure peer-to-peer storage network requires both routing and secret sharing algorithms with low complexity. Routing algorithms must ensure that any node
can efficiently identify the location of any other node. Secret sharing algorithms, which must also have low complexity, are a subclass of encryption algorithms
that distribute the plaintext among n keys, or more specifically shares, at least t (t <= n) of which must be obtained to reconstruct the data. In this project, a
simulation engine for testing the computational time and memory space of peer-to-peer secure storage algorithms was developed.<br><br>Shamir’s Secret
Sharing Algorithm, which additionally works as a secure computations algorithm, was utilized to encrypt the data. This method uses random degree t
polynomials with y-intercepts equal to the numerical value of the plaintext to encrypt the data. Then n points (x != 0, y) from this polynomial are distributed as shares across the nodes, where x is known and y is the actual share.<br><br>The routing algorithms researched include Gnutella’s method and the
Chord and Pastry algorithms, which store partial routing tables at each node, along with the One Hop method, which stores complete routing tables on certain
servers, and the Napster method, which uses a central server.<br><br>The Napster method proved to be the most efficient, whereas Gnutella’s method
was the least efficient, especially with a large number of nodes. Our simulation program was effective at realistically simulating the behavior of peer-to-peer
networks and can be used to test other algorithms for secure peer-to-peer storage.
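The polynomial scheme described above can be sketched as follows; this is a generic Shamir implementation over an illustrative prime field, not the team's simulation code:

    import random

    P = 2 ** 127 - 1                          # a Mersenne prime field (illustrative)

    def make_shares(secret, t, n):
        # random degree t-1 polynomial whose y-intercept is the secret
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]     # x != 0, so the intercept stays hidden

    def reconstruct(shares):                  # Lagrange interpolation at x = 0
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(secret=42, t=3, n=5)
    print(reconstruct(shares[:3]))            # any 3 of the 5 shares recover 42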
2008 - CS004
RETRIEVING REGIONS-OF-INTEREST FROM VIDEOS ON THE FLY WITHOUT TRANSCODING
Timothy Bauman
Central Catholic High School, Portland, OR
The purpose of this project was to find or develop an efficient method of extracting regions-of-interest of a compressed video. The desired result was a method that
took little time to extract the region-of-interest and did not worsen the video’s file size and quality significantly.<br><br>The obvious and commonly-used method
is to extract the video, crop each frame as desired, and re-encode the video. This has a minimum of excess data, but can take an unreasonable amount of time
and result in a loss of video quality.<br><br>The ideal method is simply to isolate the region-of-interest in the compressed video. However, this is impossible in
most situations because the region would rely on data from other parts of the video to decompress.<br><br>I took a novel approach to this problem. Instead of
focusing on the extraction process, I focused on the process of compressing the initial video. I created an intelligent algorithm which compresses the video with
a grid of segments which can each stand alone. These segments can then be extracted in combination to form any conceivable region-of-interest.<br>
<br>Experimentation confirmed that this method is far faster than the alternatives, the resulting files are only marginally larger, and there is no noticeable quality
loss.<br><br>This method has been developed to nearly practical usability and requires only small modifications to certain pre-existing code to be used in a
real-world environment.
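The extraction step implied by such a grid-of-independent-segments encoding reduces to selecting the tiles that cover the requested region; a minimal sketch, assuming a hypothetical fixed tile size:

    def tiles_for_roi(x, y, w, h, tile_w=64, tile_h=64):
        """Return grid coordinates of the independently coded tiles that
        must be extracted to cover a requested region-of-interest."""
        x0, y0 = x // tile_w, y // tile_h
        x1, y1 = (x + w - 1) // tile_w, (y + h - 1) // tile_h
        return [(tx, ty) for ty in range(y0, y1 + 1) for tx in range(x0, x1 + 1)]

    print(tiles_for_roi(100, 50, 200, 120))   # tiles covering a 200x120 ROI at (100, 50)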
Awards won at the 2008 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
2008 - CS005
APPLICATION OF MACHINE LEARNING METHODS TO MEDICAL DIAGNOSIS
Michael S. Cherkassky
Edina High School, Edina, MN
This project is concerned with application of machine learning methods to classification and diagnosis of medical data. Machine learning based computer aided
diagnostics (CAD) have been of growing interest in biomedical applications. Such methods are very helpful in providing a secondary opinion to a clinician
especially in mild cases of disease. In most CAD applications, a predictive diagnostic model is first estimated from the historical (or training) data, and then it is used for diagnosis of future patients. The goal of this project is to investigate the application of several machine learning methods to real-life medical data sets, in
order to understand the generalization (or prediction) capability of the estimated models. Arguably, many CAD methods can provide diagnostic capability similar
to (or better than) human doctors, in situations when the number of input variables (used for diagnosis) is large. This project investigates and compares
diagnostic accuracy of two learning methods for classification: k-nearest neighbors and support vector machine (SVM) classifiers. These methods are applied
to 3 publicly available data sets: Haberman’s Breast Cancer, Statlog Heart Disease, and Wisconsin Breast Cancer. Resampling techniques have been used for
tuning the parameters of each learning method, and for estimating their prediction accuracy. Our results show that (a) accurate medical CAD models can be
indeed obtained using machine learning methods; (b) SVM consistently outperforms k-nearest neighbors classifier, and (c) proper preprocessing and scaling of
the data is critical for obtaining accurate predictive models.
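A hedged sketch of such a comparison using scikit-learn (a library choice of mine, not stated in the abstract) on the Wisconsin Breast Cancer set, with scaling folded into the pipeline since the abstract stresses that preprocessing is critical:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)   # Wisconsin Breast Cancer data

    for name, clf in [("k-NN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="rbf", C=1.0))]:
        # resampling (cross-validation) estimates prediction accuracy
        acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
        print(name, acc.mean())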
Awards won at the 2008 ISEF
Second Award of $500 - American Statistical Association
2008 - CS040
DO YOU 'EAR WHA' I 'EAR? REDIGITIZING VOICE SIGNALS INTO LOWER FREQUENCIES TO REVOLUTIONIZE HEARING ASSISTANCE
TECHNOLOGY
Nicholas Mycroft Christensen
Wetumpka High School, Wetumpka, AL
After researching hearing, voice production and digitization of sound waves, I wrote a computer program that redigitizes .wav files of voices into lower
frequencies that can be better understood by hearing-impaired people. The program analyzes the .wav samples to determine the beginnings and ends of wave
cycles, then counts the cycles, omitting one cycle intermittently as directed. It then interpolates sample values to resample the .wav at a lower frequency while keeping the 44.1
kHz standard audio sampling within a set time frame.<br><br>I recorded low and high male and female voices, speaking similar-sounding words with many
voiceless phonemes and processed them at five frequencies, from 12.5% to 25% lower than the originals. For a test, I mixed forty-eight of the recorded words
randomly from different frequencies and asked normal and hearing-impaired people to circle which of three similar-sounding words they heard. I also recorded
a male and female speaking sentences with many voiceless phonemes, which I processed in two different frequencies. In that test I asked which was easier to
understand.<br><br>The test results from 120 people show that most prefer the normal voice range; however, those with documented or possible hearing loss
better understood the words processed into lower frequencies. On average, they missed 8% fewer words, with individual improvement up to 65%, at the lower
frequency than at the original. This program shows promise for hearing assistance and may also have application in radio and telephonic communication where
sound quality may be lost for anyone.
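The resampling idea can be sketched as follows; this simplified version lowers pitch by interpolating onto a longer sample array at the same 44.1 kHz rate (the actual program also drops cycles intermittently to preserve the time frame, which is omitted here):

    import numpy as np

    def lower_pitch(samples, factor):
        """Resample so the waveform plays back `factor` times lower in pitch
        (e.g. factor=0.875 is 12.5% lower) at the same output rate."""
        n_out = int(len(samples) / factor)
        old_idx = np.arange(len(samples))
        new_idx = np.linspace(0, len(samples) - 1, n_out)
        return np.interp(new_idx, old_idx, samples)   # linear interpolation

    rate = 44100
    t = np.arange(rate) / rate
    tone = np.sin(2 * np.pi * 440 * t)                # 440 Hz test tone
    lowered = lower_pitch(tone, 0.875)                # ~385 Hz, but longer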
Awards won at the 2008 ISEF
Second Award of $500, in addition the student's school will be awarded $200, and the student's mentor will be awarded $100 - Acoustical Society of America
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2008 - CS018
AN OPTIMIZATION ALGORITHM FOR SPACE MISSION DESIGN: DYNAMICALLY SIMULATING ENERGY-EFFICIENT TRAJECTORIES
Erika Alden DeBenedictis
Saint Pius X High School, Albuquerque, NM
Introduction<br><br>This project focuses on the development of a software system that plans, optimizes, and simulates spacecraft trajectories. Its end goal is a
dynamic space mission design system to automatically construct spacecraft trajectories between one planet and another with minimal energy expenditure by
using the gravity of other planets as a form of propulsion. <br><br>Problem Statement<br><br>How can energy efficient spacecraft paths be automatically
planned and effectively simulated?<br><br>Method and Results<br><br>This project uses a computer program developed by the researcher that incorporates
both simulation and automatic data analysis for computerized space mission planning. The program refines the initial positions of spacecraft so their paths
meet user specified requirements. This is accomplished by tracking the spacecraft’s crossing of goal regions and using this data to direct the next iteration of
refinement. This method allows the user to automatically find an accurate path for a spacecraft with a specified itinerary. The use of mathematical concepts underlying gravitational physics helps make these paths energy efficient. The software runs on multi-core processors and has been analyzed for scalability.<br>
<br>Conclusion<br><br>The current program has individually simulated launching into an L1 halo orbit, automatically transferring into an L2 halo orbit, and
reaching both near and distant planets with gravity assist from intermediate planets. These trajectory segments may be connected with an energy-gaining
transfer pattern which allows the spacecraft to achieve more distant destinations. The program makes effective use of 2- and 4-core processors and should
show further performance increases with more cores.
Awards won at the 2008 ISEF
First Award of $1,000 - IEEE Computer Society
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
Third Award $150 - Patent and Trademark Office Society
First Award of $3,000 - United States Air Force
2008 - CS017
CREATING A BETTER TRANSPARENT ENCRYPTION DEVICE
John Wilson Dorminy
Sola Fide Homeschool, McDonough, GA
Many products purport to keep data secure on external hard drives and other portable storage devices, enabling companies and institutions to control
dissemination of sensitive data. However, current products suffer from several flaws affecting portability. Most depend on host-installed software. Some of
these software solutions permit a "subliminal channel", where data can be hidden, but these are external additions to the underlying cryptographic algorithms.
Other products couple storage and encryption chips, but these do not permit reuse of existing unencrypted storage for encrypted data.<br><br>My solution offloads software encryption onto dedicated hardware, separating encryption, authentication, and storage. The device requires only basic
installation and configuration, integrates a secure subliminal channel, and recycles unencrypted storage. Like a USB hub, the device fits between a computer
and a hard drive. When connected and authenticated, the encrypted storage data and the subliminal channel data appear unencrypted to the host computer, no
different than an ordinary external drive.<br><br>My device provides 128-bit security and 10 MB/s read/write speed, comparable to current
encryption products. It has three advantages over existing products: it can encrypt existing hard drives, provide seamless encryption with minimal configuration,
and access a subliminal channel easily. These advantages make this solution superior to existing products in multiple areas.
Awards won at the 2008 ISEF
Scholarship Award of $1,500 per year for four years - Kennesaw State University - College of Science and Mathematics
file:///C/Users/SheilaKing/Documents/Abstracts/Society%20for%20Science%20&%20the%20Public%20-%20Computer%20Science.html[8/23/2015 6:04:59 PM]
Society for Science & the Public - Page
2008 - CS027
MINDMOUSE: A MIND COMPUTER INTERFACE WITHOUT PATTERN RECOGNITION BASED ON BIOFEEDBACK TRAINING
Nitin Kitchley Egbert
Bellarmine College Preparatory, San Jose, CA
This project explores the feasibility and practicality of non-invasive mind-computer interfaces in which the computer does not attempt to figure out one’s
thoughts. I hypothesized that such a device would be practical for everyday use.<br><br> I built an EEG that acquires two signals, one from either side of the
forehead, and then translates these signals into mouse movement using a very simple algorithm that does not involve pattern matching. People use the device
by conditioning themselves to produce a certain kind of signal when they want the cursor to move in a certain direction. I programmed in a set of training modes
that teach one how to move the cursor in one direction at a time. The program timed each attempt to move the cursor across the screen, enabling measurement of
progress. Most people attain fairly precise control within 40 minutes.<br><br> People learn quickly when training with the device. The first time most people try
to move the cursor, it takes about 40 seconds to move it across the screen. After practicing, it only takes about 2 seconds to move it across the screen. The
best learners could select 3 objects about the size of a desktop icon in one minute. This shows incredible promise.<br><br> It is fairly clear that anyone could
learn how to use this. The practical applications of this device are enormous. Quadriplegics could use the device to perform tasks that they can’t do otherwise.
Similar devices would also be useful to people who control complex machinery.
Awards won at the 2008 ISEF
Paid Summer Internship - Agilent Technologies
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
Second Award of $200 - Patent and Trademark Office Society
2008 - CS045
PATHFINDER: A JAVA BASED PROGRAM TO OPTIMIZE DISPERSION OF MULTIPLE ITEMS THROUGH ANY FIELD OF TRAVEL USING A GENETIC
ALGORITHM
Jeremy David Fein
Dr Michael M. Krop Senior High School, Miami, FL
The purpose of this project is to apply graph-theory representation within a genetic algorithm in order to optimize the paths of items based on how close the user wants them to be to each other during travel. In Pathfinder, the graph represents any number of items traveling through a field, with each available point represented as a vertex. Initially, a large population of graphs is made, each containing completely random paths for every item. The fitness, i.e. how well a graph solves the problem presented by the user, is calculated for every graph from proximity (how close the traveling items are to each other) against a user-desired value, density (number of items over the area they cover while traveling) against a user-desired value, and the total distance covered by all items; an illustrative fitness function is sketched below. Using fitness as a ranking, the top graphs are selected and mated, creating child graphs and a new generation that is generally 30% smaller than the previous one. Ultimately, with increasingly smaller generations, Pathfinder returns one graph, mated and mutated to best solve the initial problem. This paper describes Pathfinder's method in detail and analyzes its effectiveness and originality. Applications for Pathfinder include military troop deployment, where each troop's path is mapped based on its closeness to other troops, and the pathing of robots, where the initial problem is constantly updated through sensors and the program re-run to keep the traveling machines' paths optimal.
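The illustrative fitness function referred to above; the weights and exact penalty form are assumptions, not Pathfinder's actual formula (lower score means fitter, and at least two items are assumed):

    import math

    def fitness(paths, target_proximity, target_density, w=(1.0, 1.0, 0.5)):
        """Score one candidate graph: deviation from the user-desired
        proximity and density, plus total distance traveled."""
        steps = len(paths[0])
        proximity = 0.0
        for t in range(steps):                  # mean pairwise distance per step
            pts = [p[t] for p in paths]
            d = [math.dist(a, b) for i, a in enumerate(pts) for b in pts[i + 1:]]
            proximity += sum(d) / len(d)
        proximity /= steps
        xs = [q[0] for p in paths for q in p]
        ys = [q[1] for p in paths for q in p]
        area = max(1e-9, (max(xs) - min(xs)) * (max(ys) - min(ys)))
        density = len(paths) / area
        total = sum(math.dist(p[t], p[t + 1]) for p in paths for t in range(steps - 1))
        return (w[0] * abs(proximity - target_proximity)
                + w[1] * abs(density - target_density)
                + w[2] * total)

    paths = [[(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]]
    print(fitness(paths, target_proximity=1.0, target_density=0.5))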
Awards won at the 2008 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2008 - CS311
EXTENSIBLE ENVIRONMENT FOR CELLULAR AUTOMATA DESIGN
Michael V. Figurnov, Ruslan N. Bakeev,
Lyceum of Information Technologies # 1533, Moscow, RUSSIA
Cellular automata (CA) based models cover phenomena that range from nanoscale physics to geosciences. A common environment has been developed that allows integration of various CA procedures while retaining for researchers the most important features of such models – flexibility and open-endedness. <br><br>This environment offers C# functionality for users in three pre-defined model domains – rule design, visualization and post-processing (analysis). After writing a few lines of C# code, model integration is performed without user intervention in a coherent manner. Researchers can choose the CA type (1D or 2D) and use population libraries as well as stored procedures presenting CA rules and analysis methods; a wizard technique is also offered. The researcher's key asset, time, is thus released.<br><br>The extensible CA design environment was successfully used to model self-replicating automata and some 1D structures. Presently our
system is used in mathematical geophysics. A body deformed by frictional sliding is investigated. Sliding boundary behavior under constant deformation force
for such bodies (taking into account some recovery effects) can be modeled as CA pattern. Entering certain parameters in this model resulted in an
evolutionary process that can be interpreted as earthquake development phase. Several procedures needed to analyze lengthy model runs were easily added.
<br><br>This application was developed using Microsoft Visual Studio 2005 / 2008 C# 2.0. DotNetMagic2005 library was also used to create customizable GUI.
Managed DirectX functionality was used to boost CA rendering performance. ‘Bottleneck’ code fragments were discovered and hand-optimized using ANTS
Profiler.<br><br>The system is intended for both research and educational use.
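For readers unfamiliar with CA rule design, a minimal 1D elementary-CA stepper (plain Python rather than the project's C#; the wrap-around boundary is an assumption):

    def step(cells, rule):                  # elementary 1D CA, wrap-around edges
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                           # single seed cell
    for _ in range(15):
        print("".join(".#"[c] for c in cells))
        cells = step(cells, 90)             # rule 90: a Sierpinski-like pattern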
Awards won at the 2008 ISEF
Third Award of $1,000 - Team Projects - Presented by Science News
2008 - CS011
QUANTUM MECHANICS SIMULATIONS AND QUANTUM CRYPTOGRAPHY
Martin Furman
Gymnazium Armadneho Generala Ludvika Svobodu, Humenne, SLOVAKIA
Nowadays, the Internet is becoming a place that allows us to do things more simply and makes our lives easier. However, the Internet, like almost every element of our world, has its dark side: it is misused to gain a variety of illegal advantages.<br><br> This project presents a new way of communication called Quantum Cryptography. This kind of communication can help solve many problems, especially those originating in weaknesses of cryptographic algorithms.<br><br> Before development began, we stated three basic presumptions and kept to them all the way to the end. As the Internet is a place of different operating systems, the first presumption was that all programs be multi-platform; for this purpose we chose the Java programming language.<br><br> The second presumption was to make no compromise when developing the simulation engine, i.e. to make it behave like the real experiment. We implemented wave functions and certain quantum mechanical states of atoms (e.g. the special state of two entangled atoms). The simulation engine also implements so-called Stern-Gerlach analyzers as the measuring devices.<br><br> Finally, the third presumption was that our programs should be usable with real experiments once new technologies make it possible to perform them without enormous financial expenditure. To reach this goal we adapted our programs to the LAN environment. Thus it will be possible to replace simulated experiments with real ones.
2008 - CS025
FACIAL VALIDATION SOFTWARE THROUGH QUANTITATIVE FEATURE ANALYSIS
Malcolm William Greaves
University School of Milwaukee, Milwaukee, WI
Whether it is campus safety or a matter of national security, proper identification of people is paramount to any security system. Most current systems rely on
methods such as passwords or ID cards. Although these methods of user verification are, in theory, robust and reliable, in reality, they are susceptible to
corruption and unwanted user access. The fundamental flaw with current identification systems is that they rely on having users be separate from the information that describes them. In trying to solve this fundamental security problem, I decided to pursue a new avenue in identification: identifying
people solely using their faces. In order to solve this problem, I created a computer program that identifies people based upon the color distributions of
particular areas of their faces. These areas, known as “FacialFeatures,” quantitatively represent one’s face through histogram representation of the color
distributions and statistical analysis of this data. The program compares the quantitative data in a particular FacialFeature of one person’s face to the
correlating FacialFeature area in another person’s face. The program then generates a probability that these features are the same by comparing the
equivalency of the color distribution data. The result of implementing this original technique in a computer program has proven to be an effective method for
facial identification. The program has a high rate of correctly accepting a particular face as well as of correctly denying an incorrect face during
validation. These results suggest that this method of identification is accurate and reliable, with the promise of being integrated in a robust security system.
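A hedged sketch of the color-distribution idea: per-channel histograms of a cropped feature region, with a histogram-intersection score standing in for the program's statistical comparison (the actual probability calculation is not published in the abstract):

    import numpy as np

    def feature_histogram(region, bins=16):
        """Per-channel color histogram of one 'FacialFeature' region,
        normalized so regions of different sizes are comparable."""
        h = [np.histogram(region[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
        h = np.concatenate(h).astype(float)
        return h / h.sum()

    def match_score(h1, h2):
        """Histogram intersection in [0, 1]; 1 means identical distributions."""
        return np.minimum(h1, h2).sum()

    rng = np.random.default_rng(0)
    eye_a = rng.integers(0, 256, (24, 40, 3))   # stand-ins for cropped feature regions
    eye_b = rng.integers(0, 256, (24, 40, 3))
    print(match_score(feature_histogram(eye_a), feature_histogram(eye_b)))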
2008 - CS012
THE TIME RELATED AND HMM-BASED EMOTION RECOGNITION FROM FACIAL EXPRESSION
Hu He
The High School Affiliated to Renmin University of China, Beijing, CHINA
Emotions play an important role in human communication, and emotion recognition is a hot topic in Human-Computer Interaction research. Facial expression, an embodiment of intelligence and the principal conveyor of emotion, is the most expressive way humans display emotions. Therefore, techniques of Facial Expression Recognition, aiming at promoting the friendliness of Human-Computer Interaction, are attached with increasingly greater importance. <br><br> However, most current emotion recognition systems based on facial expression use only static pictures or discrete frames from videos. As we all know, emotion expression is a dynamic process in time; without the time information, the accuracy of emotion recognition results is limited. To solve this problem, we build a system using a Hidden Markov Model (HMM) which can automatically recognize emotions from facial expression via live video. Time-related parameters are the important features used in our model. <br><br> During the training procedure, we use a large dynamic 3D facial expression database captured by a Motion Capture System. Experiments have shown that a high recognition accuracy can be achieved with our system.<br><br> As 3D facial expressions are not easy to capture in real applications, we map the 3D data to 2D data by discarding the extra z-axis. With this idea and the above-mentioned database, we finally build a real-time system which accepts live video as input. Experiments on live video show promising recognition accuracy for our method.
Awards won at the 2008 ISEF
Paid Summer Internship - Agilent Technologies
2008 - CS001
EXAMINATION ANALYSIS
Briland Hitaj
Mehmet Akif Boys College-Tirana, Tirana, ALBANIA
Most teachers get lost in the many papers they have to check, having to administer tens of exams for hundreds of students. For example, suppose a teacher has 20 lesson-hours per week and teaches 10 different classes. Normally each class must have a minimum of 2 exams per term, so the teacher will have a total of 40 exams for hundreds of students, a vast amount of paper to analyze. <br><br>Now back to our
main problem: on one side we have hundreds of exam papers waiting to be evaluated by us. On the other side, we could have all the data in front of us and easily calculate, analyze, and produce results using a computerized system.<br><br>After talking to teachers, they advised me to prepare a program that can check multiple-choice exams faster and more easily. Why multiple choice, you might ask? Because multiple-choice examinations are more efficient and fast, they are easy to check, and the results are easy to analyze. They are also easy to prepare and to use for measuring students' levels. The teachers also advised me to design the program so that all the students' exam results could be entered.<br><br>After all the discussions with teachers, we came to the point that the following analyses are useful for exam evaluation: <br><br>1. The analysis of every question: What percentage of examinees answered this question correctly?<br><br>2. Analysis of every student across different subjects.<br><br>3. Analysis of subjects with subcategories for students.<br><br>We are living in the age of technology and no longer have to use huge amounts of paper that take up huge amounts of space on our shelves. We can simply use technology to solve the problems we face in our daily lives. The Examination Analysis program was developed for this purpose: to facilitate entering exam data into the computer, with tools to analyze the exam results easily and quickly.
2008 - CS007
DEVELOPMENT OF A DYNAMIC VIRTUAL PHYSICS ENVIRONMENT
Edward Taylor Kaszubski
Lake Highland Preparatory School, Orlando, FL
In this project, the purpose was to create a basic 2D physics environment and attempt to observe various natural phenomena created by the physics
environment. The targeted development environment was Adobe’s Flash CS3 authoring tool using the newly updated ActionScript 3.0, the specialized object-oriented programming language created for use in programming within Flash. First, a variety of classes were created to control various aspects of the simulation
from the interface to the physical calculations to animations and graphics. In all, 34 custom classes were created and the only other classes used were those
available as a standard to the Flash development tool. Nearly all physical calculations were based on Newtonian physics relating to force, the calculation and
transfer of kinetic energy, the calculation, conservation, and transfer of momentum, and the calculation of acceleration due to gravity. The main code which
controlled the simulation was developed into an object of type PhysSimUtil for easy integration into custom environments which could be built around the 34
classes which make up the framework of the environment. Six demonstration applications were created, each with the goal of replicating a specific
phenomenon. In the end, all six demonstration applications successfully achieved their respective goals.
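A minimal sketch of the Newtonian core such an environment needs, in Python rather than ActionScript 3.0; the explicit-Euler integrator and body layout are illustrative assumptions, not the PhysSimUtil design:

    GRAVITY = (0.0, 9.8)

    def integrate(bodies, dt):
        """One explicit-Euler step: apply gravity, advance velocity and position.
        Each body is a dict with 'pos', 'vel', 'mass' (illustrative layout)."""
        for b in bodies:
            b["vel"] = (b["vel"][0] + GRAVITY[0] * dt,
                        b["vel"][1] + GRAVITY[1] * dt)
            b["pos"] = (b["pos"][0] + b["vel"][0] * dt,
                        b["pos"][1] + b["vel"][1] * dt)

    ball = {"pos": (0.0, 0.0), "vel": (3.0, 0.0), "mass": 1.0}
    for _ in range(60):                      # one second at 60 steps/s
        integrate([ball], 1 / 60)
    print(ball["pos"])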
Awards won at the 2008 ISEF
Tuition Scholarship of $120,000 - Drexel University
2008 - CS307
AGENT-BASED CROWD MODELING: A GAME THEORETIC APPROACH
Adam Scott Kidder, Nicholas Triantafillou,
Saginaw Arts and Sciences Academy, Saginaw, MI
Each year, thousands of people die of asphyxiation in crowd panics caused by fires or riots in buildings designed without a true understanding of pedestrian
dynamics. For this reason, pedestrian crowds have been studied empirically for over 35 years. Still, the two predominant varieties of crowd models, the social
force and cellular automaton models, have proven ineffective in predicting holistic crowd tendencies, when applied to diverse situations. This project presents a
novel game theoretic crowd model in order to improve understanding of macroscopic crowd characteristics through microsimulation in a variety of crowd
situations.<br><br>The game theoretic model presented in this project operates on the principle that each step a pedestrian takes is probabilistically chosen based primarily on three influencing factors: an individual's self-motivation to move towards a goal location, to avoid collisions with walls and other individuals, and to maintain personal space. These subconscious desires are assumed to operate through a subconscious scoring system: by choosing a high-scoring point within the agent's achievable range of motion, the next step responds to the balance of influences in the most nearly ideal manner possible.<br><br>Overall,
the game theoretic model developed in this study shows promise for effective microsimulation of crowds and prediction of macroscopic trends. This approach
avoids common pitfalls of existing social-force and cellular automaton models while presenting opportunities for future research and optimization. It should
prove useful in fields where crowd models are already used – namely those of escape simulation and panic situations.
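A hedged sketch of the scoring idea: candidate points in the agent's reachable disc are scored on goal progress, collision avoidance, and personal space, and one is chosen probabilistically. All radii and weights are invented for illustration, not taken from the model:

    import math, random

    def score(point, goal, others, walls, w_goal=1.0, w_coll=2.0, w_space=0.5):
        """Score one candidate step: progress toward the goal, minus
        penalties for collisions and crowded personal space."""
        s = -w_goal * math.dist(point, goal)
        for o in others:
            d = math.dist(point, o)
            if d < 0.4:   s -= w_coll * (0.4 - d)    # body-radius collision
            elif d < 1.2: s -= w_space * (1.2 - d)   # personal-space intrusion
        for wall in walls:
            if math.dist(point, wall) < 0.3:
                s -= w_coll
        return s

    def choose_step(agent, goal, others, walls, reach=0.5, samples=16):
        """Probabilistically prefer high-scoring points in the reachable disc."""
        cands = []
        for _ in range(samples):
            ang, r = random.uniform(0, 2 * math.pi), random.uniform(0, reach)
            cands.append((agent[0] + r * math.cos(ang), agent[1] + r * math.sin(ang)))
        raw = [score(p, goal, others, walls) for p in cands]
        m = max(raw)                                  # stabilize the exponentials
        weights = [math.exp(s - m) for s in raw]
        return random.choices(cands, weights=weights)[0]

    print(choose_step((0.0, 0.0), (10.0, 0.0), others=[(1.0, 0.2)], walls=[]))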
2008 - CS022
REAL-TIME WATER WAVE SIMULATION WITH SURFACE ADVECTION
Dongyoung Kim
Korean Minjok Leadership Academy, Gangwon, SOUTH KOREA
We present a real-time physical simulation model of the water surface with a novel algorithm that represents water mass flow in full three dimensions. The embodiment of general surface advection is an important part of our physics system; however, it is an aspect commonly neglected in previous models of the water surface, mainly for reasons of computational cost. The physical state of the surface is represented by a set of two-dimensional fields of physical values, including height, velocity, and the gradient. The evolution of the velocity field is handled by a velocity solver based on the Navier-Stokes equations. We integrate the principle of mass conservation in a fluid of uniform density to update the height field from the unevenness of the velocity propagation, which in mathematical terms is represented by the divergence operator; a minimal sketch of this update appears below. Thus the model generates waves induced by horizontal velocity, offering a simulation that takes forces applied in all directions into account when calculating the height and velocity values for the next frame. Other effects, such as reflection off the boundaries and interactions with floating objects, are included in the algorithm. The implementation runs fast enough to scale to real-time rates even for large simulation domains. Therefore, our model is appropriate for real-time, large-scale water surface simulation in which the animator wishes to visualize the global fluid flow as a main emphasis.
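The sketch referred to above: a height-field update in which velocity follows the height gradient and height follows the negative divergence of velocity. Grid spacing, time step, and gravity value are illustrative; the authors' solver and boundary handling are not reproduced:

    import numpy as np

    def step(h, vx, vy, dt=0.1, g=9.8):
        """Advance a height-field water surface: velocity follows the height
        gradient, and height follows -div(velocity), i.e. mass conservation."""
        vx[:, :-1] -= g * dt * (h[:, 1:] - h[:, :-1])   # accelerate from high to low
        vy[:-1, :] -= g * dt * (h[1:, :] - h[:-1, :])
        div = np.zeros_like(h)
        div[:, :-1] += vx[:, :-1]; div[:, 1:] -= vx[:, :-1]
        div[:-1, :] += vy[:-1, :]; div[1:, :] -= vy[:-1, :]
        h -= dt * div                                    # dh/dt = -div(v)

    h = np.zeros((64, 64)); h[32, 32] = 1.0              # a single splash
    vx, vy = np.zeros_like(h), np.zeros_like(h)
    for _ in range(100):
        step(h, vx, vy)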
Awards won at the 2008 ISEF
Second Award of $500 - Association for Computing Machinery
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Computer Science - Presented by Intel Foundation
The SIYSS is a multi-disciplinary seminar highlighting some of the most remarkable achievements by young scientists from around the world. The students
have the opportunity to visit scientific institutes, attend the Nobel lectures and press conferences, learn more about Sweden and experience the extravagance
of the Nobel festivities. Valid passport required for travel. - Seaborg SIYSS Award
2008 - CS010
MODELING EVOLUTION: EXPLORING COMPUTATIONAL BIOLOGY AND BIOMODELING
Dmitry O. Kislyuk
California High School, San Ramon, CA
This work built a model in which organisms can mutate and pass on genetic material. The goal for the organism is to occupy a grid-like environment, covering as many cells as possible at the minimal organism length. Every organism was assigned a genetic language, which corresponded to a genotype-to-phenotype transformer. These languages varied from very basic (L1's phenotype was a direct copy of its genotype) to much more complex. <br><br>The work
demonstrated that the genotypic response to environmental change was much faster when expression control was greater (through a more complex
transformer). We found this by observing that species with more complex expression control were able to adapt much quicker to an environment, because the
organisms of that species required fewer mutations and had a higher fitness. This result highlights the complexity of evolution and ways in which organisms can
harness gene regulation and other types of expression control to adapt to their environments while maintaining the integrity of their genotype. In terms of
genetic programming, we concluded that it is more efficient to look for an algorithm which attempts to solve the problem, as opposed to seeking a direct
solution to the problem. Therefore, the most successful language was consistently L5, because of its ability to utilize expression profiles instead of relying solely
on mutations, as L1 had to. Finally, horizontal gene transfer was considered as a possible compensation for a slower evolution cycle among organisms with
less intrinsic expression control.
Awards won at the 2008 ISEF
Second Award $500 - Association for the Advancement of Artificial Intelligence
2008 - CS006
ENCRYPTING AND EMBEDDING DATA IN IMAGES
Joel Robert Knighton
Coon Rapids Senior High School, Coon Rapids, MN
The intent of this experiment was to determine if there was a relationship between choice of embedding algorithm and message visibility and which algorithm
would be most effective. <br><br>To begin this experiment a program was created that allowed users to embed encrypted messages in images. Five encoding
algorithms were implemented which changed pixel color values. A test message, “This will be our test message.”, was embedded into ten copies of an image,
five controls and one for each encoding algorithm. This was repeated a second time with a different image. All twenty of these images were presented to six
respondents and it was recorded whether these respondents believed there was data embedded in each image.<br><br>The results confirmed my belief that
the fourth encoding algorithm would be most effective: it was identified as containing data only once out of twelve responses. The results also confirmed my
belief that image content can affect visibility of embedded messages. More messages were identified in the colored image, presumably because all of the
algorithms except Encode 4 created black and white pixels.<br><br>This experiment confirmed my hypothesis that choice of algorithm will have an effect on
message visibility, and that an algorithm based on a blur effect, Encode 4, would be most effective. This is likely because it was the only algorithm that did not work with set color values, which allowed it to adapt to the image. In conclusion, this experiment furthered the goal of creating a convenient way to
transfer secure data.
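For context, one common family of embedding algorithms changes pixel color values in their least significant bit; a minimal sketch (not necessarily any of the five algorithms tested):

    def embed(pixels, message):
        """Hide message bytes in the least significant bit of each color value."""
        bits = [(byte >> i) & 1 for byte in message.encode() for i in range(8)]
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit
        return out

    def extract(pixels, length):
        data = bytearray()
        for b in range(length):
            byte = 0
            for i in range(8):
                byte |= (pixels[b * 8 + i] & 1) << i
            data.append(byte)
        return data.decode()

    flat = [137, 52, 200] * 100                 # a stand-in for flattened RGB data
    msg = "This will be our test message."
    print(extract(embed(flat, msg), len(msg)))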
2008 - CS047
THE BETTERNET: A BETTER INTERNET
Eugene Kuan
Chung Ling (National Type) High School, Air Itam, Pulau Pinang, MALAYSIA
As of now, the Internet is rather unregulated and spiraling out of control. The current system allows anonymity and thus encourages irresponsibility and
anarchy. Acts of libel are ubiquitous and criminal activities are pervasive. This project develops a method for universal data regulation and management,
providing the means for oversight of content on the internet. Basically, this project is a system which integrates online identities with physical ones, aiming to
consolidate national databases with Web applications. <br><br> This system retrieves each individual's identifying information from a government database
and the information is associated with that individual's online activities. For example, an Internet user registered with their government will have the option to
block emails originating from unregistered users. Therefore, spam can be curbed because it would be impossible for registered users to send email without
having their identity exposed. Another benefit of this system is that communications can rely more readily on trust among users. If implemented successfully, elements of libel, offense, cybercrime and undesirable content (e.g. pornography) can be inhibited. <br><br> This system can aid the growth of the internet. It can reduce network load, since in incidents where users overload the Net with unwanted content, tracking the culprit would be easy.
Users who spam and share files illegally will have their identities made known. So the Internet can grow bigger without requiring extra infrastructure to be
deployed, since less resources would be wasted on undesirable and unnecessary activities.
2008 - CS303
A MULTIFACETED APPROACH FOR THE DEVELOPMENT OF A NOVEL IP TRACEBACK METHOD FOR ATTACKER IDENTIFICATION
Arooshi Raga Kumar, Smriti Rajita Kumar,
duPont Manual High School, Louisville, KY
As the internet has become more vulnerable to cyber attacks, numerous solutions have been proposed to rectify this problem. An effective solution is locating the source of the attack: in order to trace an attacker, it is imperative that the computer from which the attack was launched is identified. A complete and accurate list of the IP addresses through which a packet travels can serve as a mechanism for tracing back. Although many approaches have been proposed for effective IP traceback, they significantly lack scalability, accuracy, and efficiency. The main goal of this project was to develop a novel Internet Control Message Protocol (ICMP) packet based IP traceback approach that could address many of the deficiencies in existing approaches. One of the most prominent problems is the overwriting of previously marked IP addresses; to overcome this, the ICMP protocol was utilized. The ICMP based approach was integrated first with wireless networks and then with a wired network. As the distance between the victim and the attacker increased, the number of packets required to construct the path back to the victim did not change significantly. The effectiveness of both the wired and wireless integration was tested in terms of delay in executing the IP traceback. The results supported the hypothesis that the newly developed ICMP approach would be more effective in addressing the problem of overwriting IP addresses, and that the wired integration of the ICMP approach would perform better than the wireless integration. The developed IP traceback method performed better than the existing methods, as supported by performance analysis.
2008 - CS021
DESIGNING A PARABOLIC REFLECTOR FOR BETTER WIRELESS RECEPTION
Billy Joe Larson
Tafuna High School, Pago Pago, AMERICAN SAMOA
This project is the result of a short-term study on the effects of parabolic reflectors on a wireless network's signal. The purpose was to test whether a parabolic reflector, attached to the two antennas of a Wireless-G broadband router, could strengthen the overall reception of a wireless network at varying distances. To determine the effects, a wireless router was tested at three distances with no parabolic reflector, with one reflector, and with two reflectors, with three trials for each condition to get an average. The wireless router, placed 3 ft away without a reflector, produced an average of 36 megabits per second (Mbps). Adding one reflector produced 54 Mbps; adding two reflectors also produced 54 Mbps, no change, as 54 Mbps is the maximum the router could reach. The router's average at 6 ft without a reflector was 13.33 Mbps; with one reflector, 32 Mbps; with two reflectors, 44 Mbps. The router was then placed at 11 ft. Without a reflector, it averaged 4.3 Mbps; with one reflector, 15.6 Mbps; with two reflectors, 22 Mbps. By testing the wireless connection strength at different locations with a parabolic reflector, the overall strength of reception increased. Each time reflectors were added to the wireless router, the signal increased by an average of 17.86 Mbps over the reflector-less router. In conclusion, parabolic reflectors do indeed increase the overall strength of a router's connection.
2008 - CS023
COMPUTATIONAL CANCER DETECTION DEVICE USING NEW SELF-REPLICATING ARTIFICIAL INTELLIGENCE NETWORKS
Edward Heesung Lee
Hamilton High School, Chandler, AZ
Affecting 20,000 Americans annually, Chronic Myelogenous Leukemia is a slow-growing cancer of the white blood cells that has less than a ~55% positive detection rate in cancer victims. Formal statistical models are ineffective.<br><br>This ANN program, on the other hand, is hypothesized to be more effective (<10% RMSE). First, the student developed a recursion program based on Chebyshev polynomials and coefficients. The special recursive program is able to self-learn from data from the previous, second-previous, and third-previous trials; thus, it holds three trials' worth of statistical memory.<br><br>Using this equation, the student sampled 1449 patients' statistical data from the U.S. Archived Leukemia Cancer Statistics and the 682-page U.S. Cancer Annual Report. Using the 1449 samples collected via SRS, the student tested three main variables that play a very important role in causing CM Leukemia.<br><br>These main variables are benzene [mg per m^3], myeloblast, and platelet concentrations in the blood [per mm^3 of blood]. The student statistically modeled the correlations for each chaotic variable. The residuals indicate that accuracy increased as the number of trials increased (learning).<br><br>The learned prediction functions of the three tests of 1449 samples were superimposed as a probability density function. The function indicates that a patient at points (x,y,z) of the curve has roughly a ~71.2% chance of acquiring or having CM Leukemia, with an R^2 value of .9052 (about 91%).
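The abstract does not give its recursion, but Chebyshev polynomials themselves satisfy a standard three-term recursion, sketched here for reference only:

    def chebyshev(n, x):
        """T_n(x) via the recursion T_0 = 1, T_1 = x, T_{k+1} = 2x*T_k - T_{k-1}."""
        t_prev, t = 1.0, x
        if n == 0:
            return t_prev
        for _ in range(n - 1):
            t_prev, t = t, 2 * x * t - t_prev
        return t

    print(chebyshev(5, 0.3))   # matches cos(5 * arccos(0.3)) ~= 0.99888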
Awards won at the 2008 ISEF
Second Award of $1,500 - United States Air Force
2008 - CS032
AN APPROACH TO ELABORATING CONTROL LOGICS OF ROBOTS USING SIMULATION AND MACHINE LEARNING
June Young Lee
Daegu Science High School, Daegu, SOUTH KOREA
A robot traditionally requires three parts working seamlessly: mechanical, electrical, and the software that controls them. Since robots are used for various purposes, the control functions are very complex. Specifically, the more complicated the behavior of the robot, the more difficult it is to determine the control functions. A number of approaches have been proposed to address these kinds of problems. Among them, machine learning techniques, which are widely used in artificial
intelligence, are often employed. However, such techniques' heavy reliance on heuristics and trial-and-error significantly restricts their application on real robots. <br><br>Moreover, in the case of small robots, like the soccer robot motivating this project, storage for a learning engine and the associated learned data is limited. We address this shortcoming by providing a simulation technique for robot learning. Our technique consists of two steps. First, our simulation/learning engine receives structural data of a robot and hints for learning, and the behaviors of robots in the learning process are visualized for checking. Second, when the learning is finished, the learned data are converted into decision-making code. With a small modification, the code can be adopted by a robot. Consequently, we can interactively check learned robot behaviors, and the learned data can be deployed in a small footprint.
Awards won at the 2008 ISEF
Scholarship Award of $3,000 - China Association for Science and Technology (CAST)
2008 - CS048
"TLATOANI" ALGORITHM THAT ENCRYPTS BRAILLE CODE INTO BARCODE
Karen Fabiola Mares Lemus
CBTis 139, San Francisco del Rincon, Guanajuato, MEXICO
Blind people can learn the characteristics of an object only if the information is written for them in Braille code. The barcode is also a non-visual writing and reading method, and "Tlatoani" uses it to expand the information available to blind people. <br><br>Designed in the Visual Basic language, the "Tlatoani" algorithm takes one character of the text at a time, identifies its Braille equivalent, and transforms it into a binary matrix, which is multiplied by two coefficient matrices with sequentially ordered exponents; the sum of the resulting matrix elements is an integer, expressed by the equation Σ([a_b][c·2^i]) = n, where i = 0,1,2,3,4,5. These numbers are concatenated to form a string of digits, which is cut and arranged to be sent to an algorithm that encrypts it into a barcode.<br><br>"Tlatoani" is applied in a word processor: the text is written by a sighted person and is encrypted and printed as a barcode. For reading, the blind person scans the barcode with an optical reader, and the "Tlatoani" algorithm recovers the original message to be read aloud by the computer or felt through a mechanical device that renders the characters in relief.<br><br>There are only a few non-visual languages; therefore, "Tlatoani" widens the creation of information by the sighted, benefiting blind people, who now have another method to access written information about their environment.<br><br>
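The per-cell integer encoding suggested by the summation over i = 0..5 can be sketched as a binary weighting of the six Braille dots; the tiny character table below is illustrative, not Tlatoani's actual tables or matrices:

    # Dot numbering: 1-6 per standard Braille cell; dot i contributes 2^(i-1),
    # in the spirit of the abstract's sum over i = 0..5.
    BRAILLE = {"a": {1}, "b": {1, 2}, "c": {1, 4}}      # tiny illustrative table

    def cell_value(char):
        dots = BRAILLE[char]
        return sum(2 ** (i - 1) for i in dots)          # integer 0..63 per cell

    def encode(text):
        return [cell_value(ch) for ch in text]

    print(encode("abc"))    # [1, 3, 9] -> digits to concatenate into a barcode string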
Awards won at the 2008 ISEF
Second Award of $250 - American Intellectual Property Law Association
Honorable Mention - Instituto Tecnologico y de Estudios Superiores de Monterrey, Campus Guadalajara
2008 - CS013
POWERCIPH - DATA ENCRYPTION
Joshua David Leverette
Boyd-Buchanan School, Chattanooga, TN
Purpose: The purpose of my project was to design a simple data encryption algorithm.<br><br>Procedures: I did this project to discover whether data encryption could be done in a fast, efficient, and simple manner, so that encryption could be used without devoting an entire library to a single algorithm. I predicted that this would be possible; I just was not sure how it could be done. First I searched the internet for open-source algorithms. I found some and viewed their source code, which ran to dozens of files. This made me realize that most modern encryption algorithms were bloated compared to what I wanted, so I experimented with different ways I could encrypt data. I considered many approaches before realizing the potency of the Exclusive Or (XOR) function. With much trial and error I implemented the algorithm I had devised. I now had what could, under certain conditions, be called a lightweight encryption algorithm, with the following properties: the algorithm is unknown to most hacking software, making it less susceptible to unwanted attacks; passwords can be stacked, giving virtually unlimited protection; and the algorithm does not know whether the data has been decrypted, so if the password is wrong, the result is the same as applying an additional password.<br><br>Results: I created this algorithm myself, and through the experience I learned many things about data encryption. To test the speed of the algorithm, I created a 34.1 megabyte plain-text file and timed its encryption: it encrypted at about 5000 Kb/s. I contacted a leading scientist at a technological university, who also volunteers as chief deputy of the county sheriff's reserve unit dealing in computer forensics and IT. I asked him to break into a few sample files I had encrypted; he was unable to break the PowerCiph encryption and was quite impressed. I feel that I succeeded in creating a secure and useful data encryption algorithm.
2008 - CS028
CONTENT-BASED IMAGE RETRIEVAL
David C. Liu
Lynbrook High School, San Jose, CA
With the astounding number of digital images available, there is an increasing need to search image collections. Many systems such as Google and Flickr use
text-based search, but the vast majority of photos (particularly family albums) have no text description available. Content-based image retrieval searches
images using visual similarity, rather than text. This research investigates methods to improve the performance of image retrieval systems.<br><br> A novel
technique is presented for automatic combination of features using machine learning to improve retrieval accuracy. Perceptual characteristics (color and texture
signatures) are extracted as a mathematical representation of images. Color signatures are extracted based on k-means clustering of Lab color space
coordinates, and texture signatures are extracted using k-means clustering of Gabor filter dictionaries of 4 scales and 6 orientations. Signature dissimilarities
are measured using the Earth Mover’s Distance, and integrated through normalized linear weighting. k-nearest neighbor supervised learning is used to predict
weights based on statistical characteristics of color and texture signatures: “color spread” and “texture busyness”.<br><br> Unlike other research in which
entire images are analyzed, this research indexes images by using specifically tagged regions. This eliminates irrelevant content and backgrounds and allows
users to specify what they are looking for and make more precise queries.<br><br> It was found that the learning model significantly improves retrieval
accuracy: by 9.32% (6.56 percentage points) over using color signatures alone, and 37.06% (20.81 percentage points) over texture signatures alone. This is a
statistically significant improvement (p-value < 0.001).<br><br> An extensible framework is also presented, which visualizes color and texture signatures. This
helps researchers understand the relationship between the optimal weights and signature characteristics. It also includes an intuitive user interface for tagging
regions and querying.<br><br> Content-based image retrieval is a very active research topic that has the potential to significantly change digital image
management as well as image search engines. This research contributes a new technique to improve retrieval accuracy.
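A toy sketch of the combination step the abstract describes: two dissimilarity measures are normalized and linearly weighted, with the weight predicted by k-nearest neighbors from signature statistics. Everything concrete here (array shapes, statistics values, the substitution of plain Euclidean distance for the Earth Mover's Distance) is an assumption for illustration.

    # Normalized linear weighting of two dissimilarities + k-NN weight prediction.
    import numpy as np

    def normalize(d):
        """Scale a vector of dissimilarities into [0, 1]."""
        d = np.asarray(d, dtype=float)
        return (d - d.min()) / (d.max() - d.min() + 1e-12)

    def combined_distance(d_color, d_texture, w):
        """Weighted combination: w favors color, (1 - w) favors texture."""
        return w * normalize(d_color) + (1 - w) * normalize(d_texture)

    def knn_predict_weight(query_stats, train_stats, train_weights, k=3):
        """Predict the weight from signature statistics such as 'color spread'
        and 'texture busyness' (the learned features named in the abstract)."""
        dists = np.linalg.norm(train_stats - query_stats, axis=1)
        nearest = np.argsort(dists)[:k]
        return train_weights[nearest].mean()

    # Toy usage: statistics are (color_spread, texture_busyness) pairs.
    train_stats = np.array([[0.2, 0.8], [0.9, 0.1], [0.5, 0.5], [0.7, 0.3]])
    train_weights = np.array([0.3, 0.9, 0.5, 0.7])   # weights found optimal offline
    w = knn_predict_weight(np.array([0.6, 0.4]), train_stats, train_weights)
    scores = combined_distance([0.1, 0.7, 0.4], [0.5, 0.2, 0.9], w)
    print("best match:", int(np.argmin(scores)))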
Awards won at the 2008 ISEF
Second Award $500 - Association for the Advancement of Artificial Intelligence
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the United States Navy and Marine Corps
2008 - CS008
A STREAM CIPHER ALGORITHM BASED ON MAGIC CUBE
Xiao Liu
Northeast Yucai School, Shenyang, Liaoning, CHINA
Inspired by magic cube, I have designed an encryption algorithm with a typical stream cipher structure to strengthen network security. When transmitting data,
both the sender and the receiver generate numerous keys from a predefined parameter called seed. The sender encrypts each part of the message with a
unique key, and the receiver decrypts correctly because they have same keys and use them simultaneously. With a digital cube as the major part, the algorithm
generates keys by the following operations. In the beginning, fill the cube with the data in the seed and enter the main loop. <br><br>In each loop: <br>
<br>Choose each rotation based on the data in the cube; <br><br>Disarrange the cube using those rotations; <br><br>Form a new key from the data in the
disarranged cube; <br><br>Modify the cube with the seed and go to the next loop.<br><br>The algorithm simulates the feature of the magic cube that it
is easy to disarrange and hard to solve. In each loop, it's easy to deduce the rotations from the cube before disarrangement, but hard
from the one after disarrangement. Provided some keys are intercepted, it's hard for attackers to recover previous cubes or the seed, and even harder
to generate the next key, since the undiscovered seed is needed when modifying the cube. Related experiment data show that the speed of
this algorithm is 0.71 times that of RC4 (5 times that of DES), and the digital cube takes up 1/4 the space of the state table in RC4.
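A toy model of the four-step loop above. The cube size, the way rotations are chosen and applied, and the hash-based key extraction are all assumptions made for illustration; the actual cipher's operations are not specified in the abstract.

    # Toy model of the abstract's key-generation loop (details assumed).
    import hashlib

    CUBE_SIZE = 27  # a 3x3x3 "digital cube" of byte cells (assumption)

    def init_cube(seed: bytes) -> bytearray:
        # Fill the cube with data derived from the seed.
        return bytearray(hashlib.sha256(seed).digest()[:CUBE_SIZE])

    def rotate(cube, face):
        # Stand-in for a cube rotation: cyclically shift one 9-cell "face".
        start = face * 9
        cube[start:start + 9] = cube[start + 1:start + 9] + cube[start:start + 1]

    def next_key(cube, seed: bytes) -> bytes:
        # 1) choose rotations from the current cube contents (data-dependent)
        for face in (cube[0] % 3, cube[13] % 3, cube[26] % 3):
            rotate(cube, face)                      # 2) disarrange the cube
        key = hashlib.sha256(bytes(cube)).digest()  # 3) form a key from the state
        for i, b in enumerate(seed):                # 4) modify cube with the seed
            cube[i % CUBE_SIZE] ^= b
        return key

    cube = init_cube(b"shared-seed")
    keys = [next_key(cube, b"shared-seed") for _ in range(3)]  # one key per block

Note how step 4 mirrors the abstract's security argument: without the seed, an attacker who intercepts keys still cannot advance the cube to the next state.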
Awards won at the 2008 ISEF
Second Award of $500 - IEEE Computer Society
2008 - CS035
ELECTRONIC WI-FI SEEKING AND POSITIONING SYSTEM
Andrew David Loss
Sarasota High School, Sarasota, FL
Currently available products for wardriving (scanning for wireless networks while using a form of transportation) are only able to log the GPS coordinate location
of where the signal was received as opposed to the location of the Access Point (AP) itself. I have constructed a device using four signal reflectors and four WiFi receivers. This device is designed to receive signals and determine Access Point locations by analyzing collected data. <br><br>The signal strength received
from multiple directional antennas at a single point provides enough information to determine the angle to the AP. Using these angles, along with at least two
points of the known coordinate location of the signal, one can determine the coordinate location of the AP. <br><br>In the course of experimentation, data was
collected at known locations of Wi-Fi networks. It was then stored in a database and sorted based on location, the unique ID of the AP, and corresponding
signal strength from each antenna. After sorting, the data was then run though an algorithm to determine the angle to the AP from the corresponding locations.
Once the angles from several locations were calculated, the coordinate location of the AP was then determined by using another mathematical formula.<br>
<br>Based on experimental use, this device is capable of providing the information needed to determine the angle to the AP and its location. This could be used
for intelligence, coverage analysis, security monitoring, and market analysis.
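The final localization step reduces to intersecting two bearing rays from known points; the abstract does not give its exact formula, so a standard ray-intersection sketch is shown here.

    # Locate an access point from bearings measured at two known points.
    import math

    def locate_ap(p1, theta1, p2, theta2):
        """p1, p2: (x, y) observation points; theta1, theta2: bearings in radians.
        Solves p1 + t1*d1 = p2 + t2*d2 for the intersection of the two rays."""
        d1 = (math.cos(theta1), math.sin(theta1))
        d2 = (math.cos(theta2), math.sin(theta2))
        denom = d1[0] * d2[1] - d1[1] * d2[0]   # cross product of the directions
        if abs(denom) < 1e-9:
            raise ValueError("bearings are parallel; a third observation is needed")
        t1 = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
        return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

    # AP at roughly (3, 4): bearings taken from the origin and from (6, 0).
    print(locate_ap((0, 0), math.atan2(4, 3), (6, 0), math.atan2(4, -3)))  # -> (3.0, 4.0)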
2008 - CS003
STEGACRYPT: A MODULAR STEGANOGRAPHY FRAMEWORK
Martin Christoph Maas
Georg Cantor Gymnasium, Halle (Saale), GERMANY
When it comes to privacy, most people rely on cryptography to protect their communication. But since transmission of encrypted data can easily be regulated
(e.g. by oppressive governments), it is sometimes necessary to hide the fact that there is secret communication at all. A way to achieve this is to hide
information within other data like pictures or audio files. This technique is called steganography.<br><br>There already exist a great number of sophisticated
steganography algorithms and programs, but they are rarely used in practical environments. This is mostly due to some limitations of these products: most
of them are very difficult to use and only support hiding some data in a single carrier file. There is no product that fits the needs of the average user by being
flexible and easy-to-use at the same time.<br><br>I created software to close this gap. It is called Stegacrypt and combines steganography and traditional
cryptography to achieve a high level of privacy. By implementing its own filesystem it can hide files as well as entire folder structures. Data is automatically
spread across multiple carrier files, which can be of different types. Besides that, Stegacrypt offers an easy-to-use GUI. It also provides a powerful and
extensible framework that enables developers to add new algorithms to the program and to use steganography features in their own software.<br>
<br>Considering all of these points, Stegacrypt could potentially make steganography available for everyone and provide a standard for the implementation of
steganographic algorithms.
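Since Stegacrypt's algorithms are pluggable, the abstract does not name a specific embedding scheme. As an illustration of "spreading data across multiple carrier files", here is classic least-significant-bit embedding distributed round-robin over several carriers; the round-robin layout is an assumption.

    # Illustrative LSB embedding spread over several carriers (not Stegacrypt's
    # actual scheme, which is plug-in based).
    def embed(carriers, payload: bytes):
        """Write payload bits into the LSBs of carrier bytes, round-robin."""
        bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
        n = len(carriers)
        for idx, bit in enumerate(bits):
            carrier, pos = carriers[idx % n], idx // n
            carrier[pos] = (carrier[pos] & 0xFE) | bit
        return carriers

    def extract(carriers, length: int) -> bytes:
        bits = []
        n = len(carriers)
        for idx in range(length * 8):
            carrier, pos = carriers[idx % n], idx // n
            bits.append(carrier[pos] & 1)
        return bytes(
            sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
        )

    files = [bytearray(100) for _ in range(3)]   # three dummy carrier files
    embed(files, b"secret")
    assert extract(files, 6) == b"secret"

Spreading the payload means no single carrier contains the whole secret, which is part of the flexibility argument made above.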
Awards won at the 2008 ISEF
Third Award of $300 - Association for Computing Machinery
Third Award of $350 - IEEE Computer Society
2008 - CS310
DEVELOPING REAL-TIME VISUALIZATION SYSTEM OF PHYSICS MODEL
Leonid Andreevich Mashinskiy, Maxim Gennadievich Gridnev, Andrey Anatolievich Churinov
Physical and Mathematical Lyceum #30, Saint-Petersburg, RUSSIA
The goal of our work is to develop real-time three-dimensional scenes with highly realistic physics. The major part of the project is devoted to correct and quick
visualization of rigid body motion and to making special effects based on pixel and vertex shaders. The rendering process is regulated by a comfortable control
system of input devices. Rendering of the polygonal objects is optimized with the help of strict typization. Our project is developed in the C++ language and
consists of several independent interacting modules. This is why we could easily expand and modify our project.<br><br>The rendering system is based on
interaction with the high-level interface developed by us. The base unit of the geometric object representation is the primitive, which contains not only geometric
information, but also some surface properties (like a color for every vertex and up to four different textures). The use of templates allows us to set different vertex
formats for different primitives. We use two different shading systems in our project: the static system for immovable objects and the dynamic system, which
was developed using vertex shaders. With dynamic shading we can activate different shader effects at every moment of scene rendering.<br><br>The
animation system is a pipeline of special units having particular priorities and response actions for every frame. These actions are synchronized in time by a
high-accuracy timer.<br><br>The rigid body physics is based on Newtonian mechanics. For correct rigid body interaction and applying an impulse at the contact
point it is necessary to solve the collision detection problem. However, exact collision detection for complex polygonal objects takes too much time.
Thus, for all objects we find a list of bounding objects. These bounding objects might be a sphere or an oriented box. Also, for optimization purposes, we first
check whether objects can intersect using easy test routines.<br><br>This project was developed for quick visualization of simple physics samples and can be
used for game applications. The modular programming method and our animation system make it easy to divide various problems between several members
of the program team.
Awards won at the 2008 ISEF
Fourth Award of $200 - Association for Computing Machinery
First Award of $3,000 - Team Projects - Presented by Science News
2008 - CS029
MULTI-TOUCH @ HOME
Bridger James Maxwell
Utah County Academy of Sciences, Orem, UT
Multi-touch is a novel way of interacting with a computer interface that allows multiple simultaneous contacts to be registered by the computer. This allows new
human interface metaphors to be used, and allows for a better translation from the physical world to the digital world. A multi-touch display is not commercially
available to consumers at this time. My project was to make a multi-touch system that is both economically feasible and useful for a home or small
business by constructing multi-touch hardware on a reasonable budget and creating software that is multi-touch enabled.<br><br>To make my multi-touch
hardware, I used a technique called frustrated total internal reflection and an infrared camera to detect where a finger contacted the display. I used a liquid
crystal display panel for the display on my multi-touch surface.<br><br>As for software, I used an open source blob detection application to detect blobs of light
from the infrared camera. I then wrote a framework in Objective-C to receive the touch data and make it easily available to other Objective-C based
applications. For the user interface, I integrated multi-touch with Core Animation, a data visualization API, to recognize and respond to basic multi-touch actions
such as scaling, rotation, or translation. <br><br>To demonstrate this technology, I made an application which searched a database and displayed
corresponding images. These images could then be manipulated through the multi-touch interface.
Awards won at the 2008 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2008 - CS042
QTSP AND P VS. NP: AN INVESTIGATION OF QUANTUM COMPUTING AND THE TRAVELING SALESMAN PROBLEM
Charles Robert Meyers
Little Rock Central High School, Little Rock, AR
The engineering goal of the experiment was to develop a quantum algorithm for solving the Traveling Salesman Problem. If possible, this should be solved in
polynomial time on a non-deterministic quantum Turing machine. The project used cost/value spreads, a quantum TSP algorithm, and a low-level quantum
computer simulation. The results show that TSP, a BQP problem and an NP-complete problem, is solvable in P space with a non-deterministic Turing machine
and that the BQP class includes at least one NP-complete problem. The results show that the algorithm is optimal. There are many implications for the
project, including large-scale trade, computational complexity theory, quantum computation, and utility maximization.
2008 - CS002
AI SMARTVISION TOOLKIT
Ondrej Miksik
Gymnazium Kromeriz, Kromeriz, CZECH REPUBLIC
Computer vision and artificial intelligence join modern mathematics and advanced computer technologies. Nowadays, thanks to rising hardware power,
digital image processing (DIP) has become useful in many practical applications, such as industrial automation, security technologies, and object tracking. I
have been interested in pattern recognition for autonomous robot driving; the next challenge was mark alignment searching for life-science instrument
calibration, where a standard laser beam interferometer could not be used.<br><br>The input image, obtained e.g. from a camera, usually needs preprocessing:
operations like histogram modification, noise reduction, or morphology. Then we can apply sophisticated edge detection filters and thresholding. The
next step is segmentation to differentiate the object of interest from the background. Labeling is essential for object classification into separate classes. Selected
object properties finally serve the practical applications. <br><br>I made a large effort to create the original program SmartVision, which represents an efficient
DIP toolkit including an integrated user environment and method library. XML configuration files specify the operating sequence of the recognition process to
customize the solution for various classes of problems. An open architecture enables further extensions and the implementation of new algorithms
using plug-in modules. <br><br>The SmartVision project provides comfortable access to DIP methods for practical use by a wide scientific community.
The project was successfully tested on both dedicated tasks: the road boundary was described by its coordinates for autonomous robot driving, and a new nonstandard method of life-science instrument calibration was developed and successfully tested.
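A generic version of the pipeline described above (preprocess, edges and thresholding, segmentation, labeling), shown here with OpenCV. SmartVision itself is a custom toolkit, so this is only an analogous sketch; the input filename and thresholds are placeholders.

    # Generic DIP pipeline analogous to the one the abstract describes.
    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
    smoothed = cv2.GaussianBlur(img, (5, 5), 0)           # noise reduction
    edges = cv2.Canny(smoothed, 50, 150)                  # edge detection
    _, mask = cv2.threshold(smoothed, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # thresholding
    # Labeling: separate connected objects and collect per-object properties.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        print(f"object {i}: bbox=({x},{y},{w},{h}) area={area} centroid={centroids[i]}")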
2008 - CS308
THE DESIGN AND IMPLEMENTATION OF PRACTICAL ALGORITHMS FOR AUTOMATIC COMPOSITE PHOTOGRAPHS
Ken Miura, Mami Inoue, Mayu Suzuki
Shizuoka-Prefectural Hamamatsu-Kita High School, Shizuoka, JAPAN
The purpose of this research is to design and implement practical algorithms to automatically create composite photographs without special facilities or
equipment, or using manual processes.<br><br>Two digital pictures must be taken. The first picture is taken with an open aperture and a fast shutter speed,
resulting in a sharp, clearly focused target object with the background out of focus. The second is a picture of a background into which the target object will be
inserted.<br><br>Manipulating each pixel allows us to extract the desired image from the first picture and paste it over the second picture, creating a composite
photograph. Algorithms: (1) Identify the outline of the target object of the first picture by detecting certain characteristic color properties of the objects in focus,
(2) Thicken the identified outline, (3) Extract the object surrounded by the outline, and (4) Paste the extracted object onto the background to complete the
composite photograph.<br><br>This method is faster than using manual production processes, and maintains a high photographic quality. In addition, it
eliminates the need for special facilities or equipment and avoids the blue-blurred edge effect of Chroma key, the most commonly used “blue screen”
technology.<br><br>The practical algorithms developed in this project successfully demonstrate three primary benefits over existing composition
methodologies – eliminating the need for special facilities or equipment, eliminating manual operations, and reducing the time all while maintaining an
equivalent image quality. This concept can further be applied and implemented in hardware devices such as cellular-phones, PDAs and digital cameras.
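A rough sketch of steps (1) through (4) above. The abstract extracts the in-focus object by its color properties; here a Laplacian sharpness response is used as a stand-in focus cue, and the threshold and kernel sizes are assumptions. Both images are assumed to be the same size.

    # Sketch of the composite algorithm: focus mask -> thicken -> extract -> paste.
    import cv2
    import numpy as np

    fg = cv2.imread("sharp_object.png")        # picture 1: object in focus
    bg = cv2.imread("background.png")          # picture 2: new background (same size)
    gray = cv2.cvtColor(fg, cv2.COLOR_BGR2GRAY)
    sharpness = np.abs(cv2.Laplacian(gray, cv2.CV_64F))        # (1) focus cue
    mask = (sharpness > 20).astype(np.uint8) * 255             # threshold (assumed)
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))         # (2) thicken outline
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                            np.ones((15, 15), np.uint8))       # fill the interior
    composite = bg.copy()
    composite[mask > 0] = fg[mask > 0]                         # (3)+(4) extract & paste
    cv2.imwrite("composite.png", composite)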
Awards won at the 2008 ISEF
Second Award $500 - Association for the Advancement of Artificial Intelligence
Fourth Award of $200 - Association for Computing Machinery
Fourth Award of $500 - Team Projects - Presented by Science News
2008 - CS043
A NEW MODEL OF SEE-THROUGH VISION: IMAGE RECONSTRUCTION IN SCATTERING MEDIA FROM QUANTUM EXIT PROBABILITIES FOR
AERIAL, TERRESTRIAL, AND UNDERWATER ROBOT INSPECTION
Lucia Mocz
Mililani High School, Mililani, HI
This project considers the problem of image reconstruction using radiative transfer theory. The theory is applied to color, contrast, and scene point recovery for
aerial, terrestrial, and underwater images taken in poor visibility conditions. For remote sensing and robotic applications the quality of images is crucial. Robotic
vision systems must include real-time computational solutions that enable them to operate in the presence of turbidity such as haze, fog, dust, and smoke. The
new, innovative algorithm presented here uses the probability of quantum exit from the medium to recover color and enhance low-contrast regions of an image.
The algorithm, called the QXP method, finds this probability from the pixel distribution and computes the intensities of the emergent light. The method then
solves, by means of integration, the problem of radiation connected with recovering the scene from the emerging light and allows seeing through moderately to
highly scattering medium. The result is a clearer image that shows extra detail in poor visibility. The method was evaluated by introducing a new quality
measure based on Kullback-Leibler divergence between the original and reconstructed pixel distributions. Experimental divergences obtained with 500 images
demonstrate the effectiveness of the method which appears computationally simpler than alternative approaches.
Awards won at the 2008 ISEF
Fourth Award of $200 - Association for Computing Machinery
Scholarship Award of $5,000 and 10-week summer research experience. Total value is $10,000 - Department of Homeland Security, University Programs
Office
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Tuition Scholarship Award of $8,000 for original research in an important Naval-relevant scientific area - Office of Naval Research on behalf of the United
States Navy and Marine Corps
2008 - CS302
VIRTUAL LIBRARIAN
Zdenek Musil, Krejci Pavel,
High School of Applied Cybernetics, Hradec Kralove, CZECH REPUBLIC
Virtual Librarian is an application created for copyright protection through digital technology. The project is intended for copyright protection of rented electronic
books and multimedia files in modern libraries.<br><br>Our solution is based on two applications: the first plays the role of a server, providing
authorization of users and of rented multimedia, and serving the files. The second application is a client providing access verification for playback and secure
presentation of the content.<br><br>The security of this solution is mainly based on RSA file encryption, currently considered one of the most secure methods. The
communication as well as the files are secured by this method. Files can be distributed over the internet or on any data medium (DVD, CD, etc.).<br><br>Our
main goal is to enable the legal rental of multimedia files to your home and also to prevent copying by the end user. Another advantage of our solution is the
ability to limit the period of the user's access to the file; after this period it will not be possible to play the file.
2008 - CS041
V.E.E.M.P. XXI
Sabrina Yanel Nicolas
Escuela de Educacion Tecnica NO1 Raul Scalabrini Ortiz, Santa Teresita, Buenos Aires, ARGENTINA
This project constitutes an alternative use for mobile phones: it teaches road safety education ("educación vial") to the user through simple games.<br><br>Over ninety percent of
teenagers own a mobile phone and use its games daily; therefore, an application for these devices became the perfect medium to get their attention. Given
that nearly twenty people die each day in Argentina due to road accidents, road safety education was the theme chosen for this project.<br><br>In the beginning, the
project evolved slowly, because none of the tools available in school were appropriate to develop such a product; after research, a J2ME development
environment was chosen. A flowchart was initially sketched, and the whole application was modeled on a data flow diagram, constructed using Yourdon’s
events partitioning approach.<br><br>While encoding the application, several tests were run and a number of faults appeared on runtime. These problems
were analyzed, and modifications to diagrams and code were done.<br><br>The present version of the product contains a Flash presentation enhanced for
mobile devices, and three applets written in Java. The user can select between a multiple-choice test, a memory game, and a traffic signs game. A WML/WAP
website was also designed so people worldwide can easily obtain the game.<br><br>The software was tested among teenagers, and not only did the product
run on different devices, but it also succeeded as an excellent alternative learning tool. Nevertheless, some average phones didn’t meet the memory requirements
to run the applications at a hundred percent.
2008 - CS009
IS SAFE GOOD ENOUGH? THE VALUE OF ADDED COMPLEXITY IN PASSWORD SECURITY
Marie Nielsen
Pacific Collegiate School, Santa Cruz, CA
Passwords secure access to everyday email, bank accounts, ATM machines, and other private information. The objective of this project is to predict how easily
passwords can be decoded using brute force password decoders. To do this, password lists were created for multiple character ranges and lengths, encoded as
Unix passwords and then decoded. The predicted decoding times were compared with experimental results.<br><br> The predicted password decoding times
were calculated using Microsoft Excel. Calculation inputs were the number of guesses per second, the number of bytes in the passwords, and the character types
used in the passwords. The calculations determined the number of possible character combinations, and from that a predicted time to decode each tested
character set at multiple lengths. Only tests of less than 24 hours were attempted. In the experiment, a personal computer, password lists, and open-source
software were used. For each experiment 20 random passwords were encrypted, and the time to decode the 20 passwords using exhaustive guessing was
recorded. Statistical analysis compared predicted versus actual results.<br><br>The number of possible combinations, as well as the predicted time to decode
the passwords, increases exponentially as the length and character choices in a password increase. A high correlation was shown to exist between the
predicted and the actual measured time it takes to decode the passwords. The exponential relationship between complexity and time to decode can be
extrapolated to determine how long a random password must be to be safe. The passwords that people choose are less random, and may require even greater length to be safe.
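The prediction side of the experiment reduces to simple counting: the number of combinations is the charset size raised to the password length, and the worst-case time is that count divided by the guessing rate. The rate below is an assumed figure, not the project's measured one.

    # Worst-case brute-force time: combinations = charset_size ** length.
    def worst_case_seconds(charset_size: int, length: int, guesses_per_sec: float):
        return charset_size ** length / guesses_per_sec

    rate = 1e6  # assumed decoder speed: one million guesses per second
    for charset, name in [(26, "lowercase"), (62, "alphanumeric"), (95, "printable")]:
        for length in (6, 8, 10):
            t = worst_case_seconds(charset, length, rate)
            print(f"{name:12s} len={length}: {t / 86400:14.2f} days")
    # Each extra character multiplies the time by the charset size, which is the
    # exponential growth the project measured.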
2008 - CS046
SIGN LANGUAGE SYSTEM (SLS)
Yuan Ning Ooi
MARA Junior Science College, Kulim, MALAYSIA
SLS is a system developed to help the deaf & mute community. This system helps the deaf & mute to learn in a better way and helps them to communicate with
the public. The only way to communicate with the deaf & mute is by using sign language. The question now is: how many people in society know
how to use sign language? Learning also becomes difficult for the deaf & mute because they cannot use sign language at home, due to their parents not
knowing how to use proper sign language either. Due to these drawbacks, most of them just complete their studies until grade 6 and refuse to continue to a
higher level, since learning at a higher level using sign language is quite tough. To overcome these problems, I have taken the initiative to develop a two-way
communication between the deaf & mute and the public and also a learning module for sign language. The learning module helps parents to learn the proper
sign language as well as help teachers to teach these special children effectively in school.<br><br> This system was created using Microsoft Visual Basic.net,
Adobe Photoshop CS2, Adobe Premier and also Microsoft Access. There are two main components in this system. The first part is the learning module which
was developed to aid the learning processes of the deaf & mute. Our main target users for this module are students and teachers in special education schools.
The second component in this system acts as a communicator between the deaf mute and the public. The communicator will help the deaf & mute who uses
sign language to communicate with the public, who in turn, do not know how to use sign language. By using the communicator, communication between the
deaf & mute and the public will not be impossible any more.<br><br> I am sure this system will be able to help the deaf & mute in their learning processes as
well as help to reduce the communication barrier between the deaf & mute and the public.
2008 - CS305
SIMULATION OF TRAFFIC PROBLEMS
Michele Pisaroni, Karaj Bernard, Gatti Marta
Liceo Scientifico Statale "Gaspare Aselli", Cremona, Cremona, ITALY
Our project starts from the idea of building a model that can represent a real situation. We decided to create a simulator, and we chose to investigate
road traffic.<br><br>Our simulator is built with a spreadsheet: the cars are represented as points, they move in a Cartesian plane, and they have different
variables. <br><br>We looked at many different programs that simulate road traffic: they are very expensive and not so easy to use, so we decided to make
something similar with a simple and cheap program and with our elementary knowledge of mathematics, physics, and computer science.<br><br>In our
method we study the traffic in different situations, such as a dual carriageway, streets where it is possible to overtake, and streets with traffic lights; we then
link together the different results and, after modifications and reworking, we can approach the complexity of a real situation. In a second stage we introduced
into our model the geometry of a real roundabout in our city, Cremona. <br><br>This procedure allows quick resolution of any errors and lets us observe
directly the evolution of our simulator in the different cases. Moreover, the modular structure of our work makes a future evolution of the simulator possible. In
fact we might insert and study many new situations using the knowledge acquired so far about an instrument with great potential: the spreadsheet.
Awards won at the 2008 ISEF
Fourth Award of $500 - Team Projects - Presented by Science News
2008 - CS030
VIRTUAL TOUR GUIDE: AN APPLICATION OF THE SCALE INVARIANT FEATURE TRANSFORM TO IMAGE-BASED LANDMARK IDENTIFICATION
Mark Jacob Pritt
Homeschool, Walkersville, MD
This project presents a system for identifying a person’s location in a city by matching a photograph taken with a cell phone against a database of images of
buildings and landmarks. Associated with each image in the database is text information, such as street address or name of landmark. The photograph is sent
to a web server, where it is matched against each image in the database to find the building that most closely matches. Information about the building is
returned to the person’s cell phone.<br><br> <br><br>The image-matching is performed by the Scale-Invariant Feature Transform (SIFT) algorithm
implemented in Java. Each building in the database requires multiple images taken under various conditions. Controlled experiments were conducted on
images collected of a model house, as well as actual buildings in Frederick, Maryland, to determine the optimum set of images for each building in the
database. These experiments showed that the system achieves optimum speed and reliability when each building in the database is represented by six images
from various angles. A system test of 180 database images and 52 cell phone photographs demonstrated that the system correctly identifies buildings 94% of
the time, which makes it suitable for actual use by a tourist. The system currently requires two minutes to process a photograph. Future work would implement
this system on a GPU (graphics card), which would reduce the runtime to a few seconds.
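The matching core is standard SIFT feature matching. The project used its own Java SIFT implementation; below is an analogous sketch using OpenCV, with a hypothetical database and filenames, and Lowe's ratio test for filtering matches.

    # Illustrative SIFT matching against a small image database (OpenCV-based
    # stand-in for the project's Java implementation).
    import cv2

    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher()

    def good_matches(query_img, db_img) -> int:
        """Count SIFT matches that pass Lowe's ratio test."""
        _, q_desc = sift.detectAndCompute(query_img, None)
        _, d_desc = sift.detectAndCompute(db_img, None)
        if q_desc is None or d_desc is None:
            return 0
        pairs = bf.knnMatch(q_desc, d_desc, k=2)
        return sum(1 for pair in pairs
                   if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance)

    query = cv2.imread("cellphone_photo.jpg", cv2.IMREAD_GRAYSCALE)
    database = {"123 Main St": "main_st_1.jpg", "City Hall": "city_hall_3.jpg"}
    scores = {label: good_matches(query, cv2.imread(path, cv2.IMREAD_GRAYSCALE))
              for label, path in database.items()}
    print("best match:", max(scores, key=scores.get))

The text information (address or landmark name) keyed to the best-matching database image is what gets returned to the phone.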
Awards won at the 2008 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2008 - CS014
DISTRIBUTED INTELLIGENCE
Dorian Naim Rahamim
Orange High School, Pepper Pike, OH
Distributed Intelligence involves the concept that intelligence distributed across many individual agents can organize itself into a new intelligence beyond that
which each agent possesses. Among the core tenets of Distributed Intelligence, emergent properties help to explain many of the phenomena that occur in
nature. Among the most apparent emergent behaviors are those of communal insects, ants in particular. A colony of ants possesses the uncanny ability to
identify, as a whole, the shortest route to a food source and ignore the longer routes to other food sources.<br><br>This project focuses on the reproduction of
this phenomenon in colonies of simulated ants in a 2-dimensional plane. Each simulated ant possesses pertinent intelligence similar to that of a real ant and
functions based on singular decisions initiated manually or by a timer. The environment implements a recursive coordinate system that allows for greater speed
and functionality in terms of the interactions between ant and environment. The recursive GUI also made use of the coordinate system to condense
the large coordinate grid. The pheromone simulations utilize an exponential duplication algorithm that mimics real-world pheromone quantities and also
combines with the recursive coordinate system for quick, efficient detection and reaction.
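A minimal pheromone-trail model of the kind the abstract describes. The grid size, decay rate, and step-selection rule are illustrative assumptions; the project's recursive coordinate system and exponential duplication algorithm are not reproduced here.

    # Minimal pheromone deposit/evaporation on a grid with biased ant steps.
    import random

    W, H = 20, 20
    pheromone = [[0.0] * W for _ in range(H)]

    def deposit(x, y, amount=1.0):
        pheromone[y][x] += amount

    def evaporate(rate=0.05):
        # Exponential decay each tick: trails to abandoned food sources fade,
        # while reinforced short routes keep accumulating.
        for row in pheromone:
            for x in range(W):
                row[x] *= (1.0 - rate)

    def choose_step(x, y):
        # Pick a neighbor with probability proportional to (pheromone + epsilon),
        # so stronger trails attract more ants, the emergent shortest-path effect.
        nbrs = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < W and 0 <= y + dy < H]
        weights = [pheromone[ny][nx] + 0.1 for nx, ny in nbrs]
        return random.choices(nbrs, weights=weights)[0]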
Awards won at the 2008 ISEF
Second Award $500 - Association for the Advancement of Artificial Intelligence
2008 - CS304
BRIDGING THE GAP BETWEEN BIOLOGY AND COMPUTER SCIENCE: A GRAPHICAL IMPLEMENTATION OF A NOVEL BIOINFORMATICS
METHODOLOGY TO IDENTIFY MICROORGANISMS WITH DEFINED CATABOLIC CHARACTERISTICS
Vijay Rajaram, Dave Grundfest,
Pulaski Academy, Little Rock, AR
This project involved the creation of bioinformatics methodology with the intent of making a computational tool for identifying catabolic characteristics using
genetic expression data. This tool is accessible to researchers without background in computer science. Given gene sets for characteristics A and B, the
algorithm retrieves sequences for individual genes from a national genetic database. Next, the algorithm searches the genetic sequences against another
sequence database using a BLAST sequence similarity analysis to find organisms that matched certain genetic parameters. A graphical implementation of the
algorithm was made, thus making it an easily accessible means for biological researchers in developing preliminary bacterial lists for further genetic
modification. The algorithm was applied to two case studies -- oceanic bioremediation and ethanol production -- for which it proved an efficient and accurate way
of finding suitable candidates. In the first study, bacteria were identified that have potential for hydrocarbon catabolism (“oil eating”) and not for quorum sensing,
which inhibits catabolic processes. In the second study, microorganisms were identified that have potential for conversion of cellular biomass, as well as
ethanol tolerance; this combination could lead to higher ethanol production yields. In both cases, the algorithm was successful and returned relevant bacteria,
which can serve as the starting point for the purpose of finding more effective organisms to degrade hydrocarbons or convert cellular biomass. Also, the
methodology has ubiquitous application in that it can serve researchers as a tool for identifying the relationships between any genetic systems while being both
resource- and cost-effective.
2008 - CS036
A SIMPLICIAL HOMOLOGY-BASED HOLE DETECTION ALGORITHM FOR WIRELESS SENSOR NETWORKS
Vinayak Ramesh
Oak Ridge High School, El Dorado Hills, CA
The purpose of this research is to solve the problem of detecting communication holes in wireless sensor networks (WSNs). Communication holes significantly
inhibit the efficiency of wireless sensor networks. This paper presents a practical and efficient algorithm for detecting the existence, number, and locations of
holes in WSNs. The algorithm assumes no knowledge of factors such as geometry, distances, and physical mote location as it is based on concepts from the
field of algebraic topology; more specifically, simplicial homology. Application of homology to analyze WSN topology is an emerging area of research. This
algorithm takes into account the minimal computational resources and capabilities of motes in wireless sensor networks. This paper shows that only the “table-of-neighboring-motes”, a standard data structure in WSNs, is needed to detect holes. The methods presented in this paper require no additional programming
of standard motes available in the market today. In addition, this algorithm can be applied to other types of ad-hoc wireless networks such as cell phone
networks and internet networks. In the future, networks will consist of thousands of nodes; the nature of this algorithm allows it to detect holes even in very
large networks.
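A drastically simplified illustration of the homological idea, not the paper's algorithm: treat the neighbor table as a simplicial complex with every triangle filled, and count holes via the Euler characteristic. The formula b1 = b0 - (V - E + T) assumes there is no 2-dimensional homology, a simplifying assumption made only for this sketch.

    # Count holes in a communication graph via simplicial homology (simplified).
    from itertools import combinations

    def count_holes(neighbors: dict) -> int:
        edges = {frozenset((u, v)) for u, vs in neighbors.items() for v in vs}
        triangles = {frozenset(t) for t in combinations(neighbors, 3)
                     if all(frozenset(p) in edges for p in combinations(t, 2))}
        # Connected components (b0) via repeated flood fill.
        seen, b0 = set(), 0
        for start in neighbors:
            if start in seen:
                continue
            b0, stack = b0 + 1, [start]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(neighbors[u])
        V, E, T = len(neighbors), len(edges), len(triangles)
        return b0 - (V - E + T)   # first Betti number = number of holes (assumed b2 = 0)

    # A 4-cycle with no filled triangle has exactly one hole.
    square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(count_holes(square))   # -> 1

Note that the only input is the table of neighboring motes, matching the paper's claim that no geometry or distances are needed.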
Awards won at the 2008 ISEF
DO NOT ANNOUNCE DO NOT PRINT IN PRESS RELEASE - Alternate to summer internship - Agilent Technologies
Second Award of $500 U.S. Savings Bond - Ashtavadhani Vidwan Ambati Subbaraya Chetty (AVASC) Foundation
Second Award of $1,500 - Computer Science - Presented by Intel Foundation
Award of three $1,000 U.S. Savings Bonds, a certificate of achievement and a gold medallion. - United States Army
2008 - CS301
INWO: EVOLUTIONARY HIERARCHICAL TEXTUAL INFORMATION MAPPING
Brandon Lee Reavis, Brian Christopher Reavis,
Cody High School, Cody, WY
Having software take a completely out-of-context chunk of information, process it, and give the meaningful information back to the user is a grand
challenge in computer science. The project’s focus is to alleviate that problem in the textual realm. <br><br> With relatively simple rules using word and phrase
proximity as the primary factor, our algorithm parses textual streams for relationships between words and stores them in a MySQL database. This information
forms a simple, yet expansive, tree of linked content. One key advantage of the algorithm is that it is language-independent, meaning that it can
process text from any language without modification. Also, the rules of the algorithm are loose: formal rules of English aren’t programmed into it.
This makes it well suited for speech-recognition applications, as punctuation and an occasional missed word will not significantly harm processing. As it processes
input data, it abstracts greater relationships. An initial test using a corpus consisting of 134 Wikipedia articles yielded 245,767 valid relationships that connect
one noun to another with a score.<br><br> The applications of such an algorithm are immense: internet searching that actually accounts for the searcher’s
context, mobile devices that can learn from their environment, and other systems that can effectively use language.
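A sketch of proximity-based relationship extraction as described above: score word pairs by co-occurrence within a sliding window, with closer pairs scoring higher. The window size and the 1/distance scoring are assumptions; the project stores such pairs in MySQL rather than in memory. Because it only splits on whitespace and encodes no English grammar, the sketch shares the language-independence the abstract emphasizes.

    # Proximity-scored word-pair extraction over a sliding window.
    from collections import Counter

    def relationships(text: str, window: int = 5) -> Counter:
        words = text.lower().split()
        scores = Counter()
        for i, w in enumerate(words):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                # Closer pairs get a larger score: proximity as the primary factor.
                scores[(w, words[j])] += 1.0 / (j - i)
        return scores

    text = "the ant colony finds the shortest path to the food source"
    for pair, score in relationships(text).most_common(5):
        print(pair, round(score, 2))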
2008 - CS309
HEURISTICS FOR NP-COMPLETE PROBLEM RESOLUTION IN THE GAMES' DOMAIN
Carla Rezende Barbosa Bonin, Gustavo Montes Novaes, Mauro Lucio Ruy de Almeida Filho
Centro Federal Tecnologica de Minas Gerais, Leopoldina, Minas Gerais, BRAZIL
This project is based on the achievement of a better solution for electronic reasoning games using the structure known as search tree, and has as main
objective its use in the games’ domain, comparing the performance and the accuracy of the following heuristics: Genetic Algorithm (GA), GRASP (Greedy
Randomized Adaptive Search Procedure) and GA-GRASP (a combination of the best parts of the first two heuristics to create an advanced one) to find the set
of movements that solves a reasoning game (puzzle). The main contribution of this research is the proposal of a new method based on a model that
incorporates the structure of Genetic Algorithm searches and the elitist generation of GRASP initial solutions. Another contribution of this project is the original
implementation of the GRASP and GA-GRASP heuristics in the games’ domain representing a possible solution to the resolution of such problems.<br>
<br>Thus, a software program called "Heuristic Combat" was created. Its main purpose was to enable the development and implementation of the Genetic
Algorithm heuristic, already used in the games’ domain, as well as the GRASP and the GA-GRASP heuristics, using the FreePascal language and the Lazarus
Program. The successful development of the Heuristic Combat software program allowed the execution of several tests, which generated significant
results. It was possible to conclude that both heuristics, GRASP and GA-GRASP, produce much better results than those obtained with the Genetic Algorithm, in
both final speed and search precision. Therefore, the project effectively fulfills its purpose: to propose and develop two new heuristics able to
solve NP-complete problems in the games’ domain, with substantially better results than those currently obtained with Genetic Algorithms.
2008 - CS039
EASYRX, THE BEST RX FOR THE EARTH
Juan Luis Rivera-Sanchez
Francisco Morales High School, Naranjito, PUERTO RICO
The problem for this study was: is it possible to create affordable and user-friendly electronic prescribing software? The hypothesis was that it is possible to
create affordable and user-friendly electronic prescribing software using Microsoft Visual Basic 2008. Electronic prescribing software was designed using
Microsoft Visual Basic 2008. Once the programming was done, the program was compiled into an executable (.exe). The program was tested to
look for possible errors and then a second version of it was made. The program developed was ePrescribing software called EasyRx. EasyRx offers the
doctor the opportunity to make the prescription and to send it to the pharmacy via e-mail. The doctor also has the option to make the prescription in the traditional
way, by printing it. The advantage of sending it via e-mail is that the patient will not have the prescription in his or her hands, avoiding the possibility of
losing it. EasyRx will help those patients that do not have time to take the prescription to the pharmacy, and it will help to reduce the use of paper in
medical prescriptions. The program has a database in Microsoft® SQL Server 2005. That database supports the program by holding all the
information of the patient’s record: medications, doses, usage frequency, amount of the prescribed medication, and a notification if the prescription has a
refill.
2008 - CS038
MUSICAL INSTRUMENT SOUND ANALYSIS AND RECOGNITION SYSTEM
Boonyarit Somrealvongkul
Mahidol Wittayanusorn School, Phutthamonthon, Nakorn Pathom, THAILAND
Musical instrument sound analysis and recognition system has been developed to facilitate musicians in playing several MIDI instruments while playing music.
This allows people to create a fantastic live performance. Our system takes audio files as input and produces a real-time MIDI output, representing the pitch,
timing and volume of the musical notes. The MIDI output can be sent to an external synthesizer to create an accompanying sound performance or trigger light
effects to make an impressive light show. The system uses the short-time Fourier transform to represent the signal and finds musical notes by multiple
fundamental-frequency estimation and pattern-matching algorithms. The test process is automated by synthesizing audio data from MIDI files using high quality
synthesis software. Our system achieved 47.3% accuracy at a 0.14 real-time factor, whereas SONIC, a high-accuracy neural-network-based music
transcription software, achieved 52.8% accuracy at a 15.42 real-time factor on the same computing machine. From the experiment, our system achieves a
lower real-time factor with slightly lower accuracy than SONIC. To improve our system, some pre-processing techniques successfully used in
similar tasks will be implemented and tested. They include audio frame overlapping and windowing, and the use of power spectral density instead of pure
spectral data, which are expected to help reduce the spectral noise.
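A sketch of the front end only: a short-time Fourier transform over windowed frames, naive single-peak picking, and conversion of the peak frequency to a MIDI note number. The frame, hop, and threshold values are assumptions, and the single-peak step is far simpler than the system's multiple fundamental-frequency estimation.

    # STFT -> spectral peak -> MIDI note number (naive single-note version).
    import numpy as np

    def stft_notes(signal, sr=44100, frame=4096, hop=1024, threshold=0.1):
        window = np.hanning(frame)
        notes = []
        for start in range(0, len(signal) - frame, hop):
            spectrum = np.abs(np.fft.rfft(signal[start:start + frame] * window))
            peak = int(np.argmax(spectrum))
            if peak > 0 and spectrum[peak] > threshold * frame:
                freq = peak * sr / frame
                # Standard MIDI mapping: A4 = 440 Hz = note 69.
                midi = int(round(69 + 12 * np.log2(freq / 440.0)))
                notes.append((start / sr, midi))
        return notes

    # A 440 Hz sine should be reported as MIDI note 69 (A4).
    t = np.arange(44100) / 44100
    print(stft_notes(np.sin(2 * np.pi * 440 * t))[:3])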
2008 - CS044
SO MANY DATA--SO FEW LABELS: THE EFFECTS OF AUTOENCODERS ON UNLABELED DATA FOR PATTERN RECOGNITION
Thomas Ben Thompson
Huron High School, Ann Arbor, MI
The field of pattern recognition would advance rapidly if it could effectively use the vast reservoirs of unlabeled data available. However, currently, most pattern
recognition techniques require labeled data, which are far less commonly available. In the research presented here, a neural-network-based technique, the
autoencoder, was tested to see, when it was used as a front-end to a pattern recognition technique, whether or not it could improve pattern recognition using
unlabeled data. Autoencoders were tested in conjunction with Multi-Layer Perceptrons (MLPs) and with Support Vector Machines (SVMs) by training them on
the MNIST dataset of handwritten numbers. In a simulation of an environment with few labeled data (50 labeled and 59,950 unlabeled cases), the autoencoder
improved the performance of both the MLPs and the SVMs on the MNIST testing set by 16 percent and 10 percent, respectively. Consequently, the
autoencoder has the potential to tap enormous reserves of unlabeled data. This dramatically expands the range of applications for pattern recognition
techniques in areas ranging from robotics and computer vision to medical diagnosis and surveillance.
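A sketch of the experiment's structure: learn features from unlabeled data with an autoencoder, then train a classifier on only 50 labeled cases. The architecture, data, and use of sklearn's MLPRegressor as a one-hidden-layer autoencoder are assumptions; the study's exact setup is not specified here.

    # Autoencoder front-end for a classifier trained on scarce labels.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X_unlabeled = rng.random((2000, 64))          # stand-in for unlabeled images
    X_labeled = rng.random((50, 64))              # the scarce labeled set
    y_labeled = rng.integers(0, 10, 50)

    # Train the autoencoder to reconstruct its input from the unlabeled pool.
    ae = MLPRegressor(hidden_layer_sizes=(32,), activation="relu", max_iter=500)
    ae.fit(X_unlabeled, X_unlabeled)

    def encode(X):
        # Hidden-layer activations = learned features (ReLU of the first layer).
        return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

    # The classifier sees only 50 labeled cases, but in the learned feature space.
    clf = LogisticRegression(max_iter=1000).fit(encode(X_labeled), y_labeled)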
Awards won at the 2008 ISEF
First Award of $1,000 - Association for the Advancement of Artificial Intelligence
Scholarship Award of $1,000 - National Collegiate Inventors and Innovators Alliance/The Lemelson Foundation
2008 - CS034
HIGH PERFORMANCE COMPUTING IN SIMULATING EXTREME WEATHER
Benjamin E. Ujcich
Horry County Schools' Scholars Academy, Conway, SC
Tornadoes are devastating phenomena that can form at any time or location if conditions are right. At present, to forecast weather 7-10 days in advance,
computer models typically make predictions of atmospheric structures (temperature, pressure, and winds) at every 12-27 kilometers (km). However, to simulate
realistic tornadoes, computer models must resolve the atmospheric structures at a much higher spatial resolution (i.e. < 1 km). Here, we examined how
increasing spatial resolution impacts computational resources in simulating tornado-like disturbances. A weather model was run on a supercomputing cluster.
Using historical meteorological data, simulations of the 20 May 1977 tornadoes over Del City (Oklahoma) were performed. <br><br><br> Quadrupling the
simulation’s spatial resolution (from 1 km to 0.25 km) resulted in more than a 20-fold increase in file size of the model output. To simulate a 2-hour storm at
0.25-km resolution, the model calculated the atmospheric structures every 0.2 seconds and produced nearly 2 gigabytes of data. In quadrupling the spatial
resolution, the predictive time step had to be significantly reduced to maintain accuracy, leading to a 300-fold increase in completion time from 1 km to 0.25 km.
However, the benefits of increased resolution were clear. The storm structure was much better defined, and regions of probable tornado genesis were better
identified. <br><br><br> The results suggest that making tornado predictions is possible; however, present computational resources continue to prevent such
predictions from being readily available for practical use.
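A back-of-the-envelope check on the reported ~300-fold cost increase, assuming refinement in all three spatial dimensions and a CFL-limited time step (the step must shrink in proportion to the grid spacing):

    # Why 4x finer resolution costs far more than 4x the compute.
    refinement = 4                     # 1 km -> 0.25 km
    cell_factor = refinement ** 3      # 64x more grid cells in 3-D
    timestep_factor = refinement       # 4x more time steps for the same 2 hours
    print(cell_factor * timestep_factor)   # 256, the same order as the observed ~300x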
2008 - CS306
THE COMPUTER ANALYSIS OF DIGITAL X-RAYS TO IDENTIFY ORAL CAVITIES
Mary Rochelle Unsworth, Rebekah Unsworth,
Kingswood Academy, Sulphur, LA
Throughout history, dental treatments have been both costly and painful. While technology is continually making it easier for dentists to find and deal with a
variety of problems, it is still easy to overlook a small cavity in an x-ray. An unnoticed cavity will deteriorate and penetrate the pulp, and repairing such cavities
requires costly and painful procedures such as root canals. Early detection is critical when trying to avoid these measures. <br><br>The purpose of this project
is to develop a computer program that assists dentists by identifying oral cavities in digital x-rays. Using Visual Basic 6.0 with the THBImage Application, a
program was developed that identifies the teeth, the jaws, and potential cavities.<br><br>When the program opens, an x-ray is selected. The computer loads
and displays it. The code first finds the usable middle. Next, using the RGB values of each pixel, it locates the jaw-line, then the gums, and finally the tops of
each tooth. Beginning at the tops of the teeth, the program searches linearly through the teeth, looking for discolored, 'cavity' pixels. At each 'cavity' pixel, the
program enters into the IsThisACavityYN function, which searches the surrounding pixels for like values, determining whether or not the pixel is part of a cavity.
If so, the pixel is demarcated in green. After this cycle has analyzed the entire x-ray, the program outlines cavity areas in blue squares. The altered image is
displayed next to the original x-ray.<br><br>This project provides a quick, easy way to identify cavities. It greatly reduces the chances of a dentist missing a
cavity in an x-ray, thus preventing unnecessary pain and expense.
Awards won at the 2008 ISEF
Team First Award of $500 for each team member - IEEE Computer Society
2008 - CS037
HAND TRANSPLANT REJECTION DETECTION: A SOFTWARE SOLUTION
Mark A. Vitale
duPont Manual High School, Louisville, KY
Hand transplants occur more frequently worldwide. An inherent risk with hand transplants, like all transplants, is rejection of the new tissue. Time is a crucial
element in hand transplant rejection. If rejection is detected early, a course of treatment can prevent the rejection from being so severe as to require amputation
or endanger the patient’s life. With this information, it is apparent that early detection of hand transplant rejection can dictate the success of the transplant as a
whole.<br><br> Rejection of the hand is indicated by swelling and inflammation of the rejected tissue, caused by the body's immune response. The skin, if
rejected, will begin to turn red. If slight changes in redness could be detected early, success rates of hand transplants could be increased.<br><br> The Hand
Project is a program designed to aid hand transplant surgeons in preventing full-scale rejection of tissue. The software detects minute changes in redness by
using an algorithm that determines a redness level based on the red, green, and blue values of each individual pixel in the image. <br><br> The program was
tested via simulated vascular obstruction. Of the 22 cases studied, approximately 95% were successfully analyzed and changes detected by the software. The
program compares redness levels in images to a sample of normal skin taken from another image. In this way, changes over time can be ascertained and a
change in the amount of inflammation, and thus the likelihood of a rejection, can be determined before changes are evident to the human eye.
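The abstract says the redness level is computed from each pixel's red, green, and blue values but does not give the formula; one common way to fold RGB into a single redness score, compared against a normal-skin baseline, is sketched below (the tolerance value is an assumption).

    # Per-pixel redness score vs. a normal-skin baseline (formula assumed).
    import numpy as np

    def redness(image: np.ndarray) -> float:
        """image: H x W x 3 array of R, G, B floats. Red minus the other channels."""
        r, g, b = image[..., 0], image[..., 1], image[..., 2]
        return float(np.mean(r - (g + b) / 2.0))

    def rejection_signal(region, baseline_region, tolerance=5.0) -> bool:
        # Flag the region if it is redder than normal skin by more than `tolerance`,
        # tracking changes before they are evident to the human eye.
        return redness(region) - redness(baseline_region) > tolerance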
2008 - CS031
PARALLEL COMPUTER PROCESSING: AN EMULATION MODEL
Ariel Yehuda Wexler
Massachusetts Academy of Math and Science, Worcester, MA
Parallel processing provides an increase in computation speed by managing multiple processing units simultaneously. Control and coordination of the additional
processors (subprocessors) is a major challenge in parallel system design. In this project a solution to the control problem was developed by emulating
a new parallel architecture with an accompanying RISC (Reduced Instruction Set Computer) assembler instruction set. A hardware feedback and control
unit (Asynchronous Listener – ASL) was implemented to provide continuous updates to the ROOT (main) processor with current subprocessor status, thus
eliminating the need for a software-intensive scheduler. One key assembler instruction partitioned tasks to subprocessors, while the Asynchronous Listener
informed the ROOT of available subprocessors. The emulation model was tested with varying numbers of subprocessors using a cellular automata program, a
sine wave addition program, and a Lissajous pattern generator. Maximum speedups attained were in the range of 3 to 74, depending on the parallelizability
of the task. Amdahl’s law (speedup increases at a lesser rate as the number of subprocessors increases) was upheld, consistent with prior studies of
parallel processing systems. The ASL proposed in this study alleviated the task of pre-planning all subprocessor assignments during the programming stage and
lowered the software overhead for parallel computation. Future development of this project may include the logic gate design of the ASL and development of a
parallel compiler. This novel hardware architecture could be feasibly utilized in the development of a parallel operating system.
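Amdahl's law, which the emulation results upheld, states that with parallelizable fraction p of a task and N subprocessors the speedup is bounded by the serial fraction:

    # Amdahl's law: speedup saturates as the number of subprocessors grows.
    def amdahl_speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    for n in (2, 8, 32, 128):
        # Even at 99% parallelizable, each doubling of N yields less added speedup,
        # matching the leveling-off the abstract reports.
        print(n, round(amdahl_speedup(0.99, n), 1))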
Awards won at the 2008 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel Foundation
2008 - CS026
APPLICATION OF REAL TIME NEURAL NETWORKS FOR USE WITH DATA GLOVES
Colby Anne Wilkason
Warner Robins High School, Warner Robins, GA
The purpose of this experiment was to explore applications of real time Artificial Neural
Networks (ANN) for use with Data Gloves. The Data Gloves used flex sensors to indicate finger flexing and accelerometers to indicate hand orientation. Neural
networks were used to map the finger and hand positions to various outputs including Virtual Woodwind instruments and American Sign Language. The
hypothesis was, if the scientist can use data gloves and neural networks to create a general purpose architecture, then specific applications for American Sign
Language and Virtual Woodwind can be implemented. The hypothesis was accepted. The independent variables were the finger and hand positions of the
gloves. The dependent variables were the resistance of the flex sensors, the voltage out of the accelerometers and into the data acquisition device, and the
notes or signs produced. The controlled variables were the voltage range and gain of the buffer circuit, the software algorithms, the neural network parameters,
inputs and outputs for neural networks, and the selection of the DAQ. In this experiment, the scientist manipulated flex sensors inserted into the gloves and
accelerometers on the gloves to produce hand gestures for the Virtual Woodwind and Virtual Sign Language applications. Neural networks were used to train
and play back desired notes or letters for each application.
Awards won at the 2008 ISEF
First Award of $1,000 - Association for the Advancement of Artificial Intelligence
Scholarship Award of $1,500 per year for four years - Kennesaw State University - College of Science and Mathematics
Fourth Award of $500 - Computer Science - Presented by Intel Foundation
2008 - CS033
COMPUTER ANTIMATION
Tobias Jacob Williams
Highland Secondary School, Dundas, Ontario, CANADA
Ant colony optimization is a novel algorithm discovered in 1992 that is still not commonly used. In this project, it was tested against an exhaustive search
algorithm and a greedy search algorithm to assess performance in terms of finding the shortest path through mazes in the least amount of time. <br><br> A
maze was used as a model of a distance reduction problem, often encountered in the natural world and human environment. A program was written to test the
performance of the three algorithms on a wide variety of mazes. The length of the shortest path found, and time required for each solution was recorded. The
results of the ant colony optimization algorithm were compared against the results of the greedy and exhaustive search algorithms.<br><br> It was observed
that the ant colony optimization algorithm was significantly faster than the exhaustive search algorithm yet only half as fast as the greedy algorithm for almost all of
the mazes. The greedy algorithm provided poor solutions quickly while the exhaustive search provided the best solutions so slowly that results were obtainable
for only smaller maze sizes. The ant colony optimization algorithm provided reliable solutions in a reasonable amount of time. In addition, the algorithm can be
tuned to provide limitless options for optimal performance.<br><br> The ant colony optimization algorithm was a good compromise between speed and solution
quality, making it the best model for real life applications such as computer navigation for vehicles, network routing, manufacturing processes, engineering
designs, employee scheduling, and logistics.
Awards won at the 2008 ISEF
Second Award $500 - Association for the Advancement of Artificial Intelligence
2008 - CS020
DEVELOPING XUNI: A GRAPHICAL USER INTERFACE WIDGET TOOLKIT
David Christopher Williams-King
Argyll Centre, Edmonton, Alberta, CANADA
xuni, an open source Graphical User Interface (GUI) widget toolkit, was developed in C for the Simple DirectMedia Layer (SDL). xuni was designed to be as
portable and flexible as possible, and supports many features, including widget rescaling -- something no other toolkit examined can do.<br><br> xuni is a very
large project, with over 10,000 lines of source code. As designed, xuni is also very flexible, for all who might utilize it: programmers, artists, and end users. xuni
also has many features that fall into the other three categories: it is reasonably complete, very robust, and fairly efficient.<br><br> Some example features of
xuni include: external resource files and themes (artist flexibility); basic text editing with insertion and deletion (user flexibility); reference counting to catch
memory leaks (efficiency); and of course, rescalable widgets (user flexibility), which xuni is unique in implementing.<br><br> xuni was published online, and
feedback was received, including some bug reports. Several test programs were written that used xuni, and a game was even ported to use xuni.<br><br>
Finally, xuni was compared with other similar widget toolkits in terms of flexibility, completeness, robustness, and efficiency. xuni performed very well in the
tests. It won all of the flexibility subcategories with ease. Though somewhat weak in completeness, xuni still won half of that category. xuni won robustness
overall, and was reasonably efficient.<br><br> xuni is a competitive open source toolkit that, uniquely, provides a way for SDL applications written in C to use
widget rescaling.
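A minimal sketch of the idea behind rescalable widgets -- geometry stored as fractions of the parent, so the whole interface follows the window size. The names and layout scheme here are hypothetical; xuni itself is written in C for SDL:

```python
# Each widget keeps its position and size as fractions of the parent box,
# so resizing the window rescales the entire widget tree proportionally.
from dataclasses import dataclass

@dataclass
class Widget:
    name: str
    fx: float; fy: float; fw: float; fh: float   # fractions of the parent box

def layout(widgets, win_w, win_h):
    """Map fractional geometry to pixel rectangles for the current window."""
    return {w.name: (round(w.fx * win_w), round(w.fy * win_h),
                     round(w.fw * win_w), round(w.fh * win_h))
            for w in widgets}

ui = [Widget("menu",   0.0, 0.0, 1.0, 0.1),
      Widget("canvas", 0.0, 0.1, 0.8, 0.9),
      Widget("panel",  0.8, 0.1, 0.2, 0.9)]

print(layout(ui, 640, 480))    # widgets follow any window size
print(layout(ui, 1280, 720))
```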
Awards won at the 2008 ISEF
First Award of $1,000 - Association for Computing Machinery
2008 - CS015
WHO IS THIS? AN EAR BIOMETRICS STUDY
John D. Wu
Princeton High School, Princeton, NJ
Biometrics is a process that uses distinguishable human features to identify people. Ears are unique to every individual and do not change shape over time. This
research develops and evaluates different feature extraction methods for ear recognition and the comparison of extracted features. <br><br>
Pictures were collected with subjects wearing a physical ear shield to cover the hair in the background. The pictures were standardized by grayscaling and
cropped to a fixed pixel size. Then, three methods were used to extract features from the ear: Photoshop's threshold, IDL's built-in contour tool, and a new
contour model with an edge filter using overlapping 3 x 3 arrays. Cross-correlation was used to compare and classify the images of extracted features, returning a
numerical value between 0 and 1, with 1 being most alike. <br><br>In contrast to the Photoshop threshold method and the IDL contour tool, the new
automated contour model, used with an edge filter, produced ‘cleaner’ ear images with less noise and little loss of ear features. Correlation over two-dimensional
arrays, programmed in IDL, proved simple and effective for comparing and classifying the images. The implementation of this new combination of hardware
and software is effective at extracting and classifying ear features.<br><br>The future challenges of adapting this system for ear biometrics
applications include establishing a database to further test this method, and automating the ear recognition system for collecting, processing, and comparing
ear images.
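A minimal sketch of the comparison step under stated assumptions -- normalized cross-correlation of two equal-size grayscale arrays. The abstract's IDL pipeline is only paraphrased here, and plain normalized cross-correlation scores lie in [-1, 1] rather than [0, 1]:

```python
# Normalized cross-correlation: near 1 for alike images, near 0 for unrelated.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two same-shape grayscale images."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
ear = rng.random((64, 64))
noisy = ear + 0.1 * rng.random((64, 64))     # same ear, slight noise
other = rng.random((64, 64))                 # a different ear
print(ncc(ear, noisy), ncc(ear, other))      # high score vs. near-zero score
```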
2009 - CS036
PROJECT ZIER: INNOVATING CREDIT CARD SECURITY
Kunal Agarwal
Evergreen Valley High School, San Jose, CA
--- Objectives/Goals ---<br><br>Credit card purchases are fraught with vulnerabilities, and identity theft can and does result from everyday activities. Project
Zier unveils a new identity verification system.<br><br>--- Methods/Materials ---<br><br>Zier comprises ZierStation, ZierCard, and ZierServer. First, the
customer enters a code displayed on the ZierCard into the ZierStation at the point of sale (POS). Second, the ZierStation uses this number to connect to the ZierServer and retrieve the
customer's voiceprint. The ZierStation records the customer speaking his or her name live and attempts to match it with the voiceprint on record. Finally, if the match is made, the ZierServer
sends a text message to the customer, which is then entered into the ZierStation to complete the transaction. <br><br>The current prototype system utilizes a computer
and an iPhone (as a ZierCard substitute). <br><br>--- Results ---<br><br>Zier has delivered promising results. Any attempt to breach the system requires three
barriers to be broken: physical possession of the ZierCard, a matching voiceprint, and the customer's cell phone. This creates a far more secure system than the
present-day solution.<br><br>--- Conclusions/Discussion ---<br><br>R&D concludes that Zier can effectively deter fraud. Hopefully, credit card companies will
consider implementing this prototype to bolster industry standards. In production, the ZierCard will consist of an EPaper screen and a Paper Battery. The physical
system's processes will remain the same, but a user-friendly GUI will be programmed. Lastly, while Zier provides three levels of security, these levels can be
scaled down as appropriate for the transaction.<br><br>--- Summary ---<br><br>Project Zier is the basis of an innovative credit security system that will include
dynamic card number generation, speaker identification, and cellular text verification.
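A minimal sketch of the three-factor flow the abstract describes (card code, voiceprint, SMS code); every function name and the similarity threshold are hypothetical stand-ins for the real components:

```python
# Three barriers in sequence: possession of the card, a matching voiceprint,
# and the code texted to the customer's phone.
def verify_transaction(card_code, live_voice, entered_sms,
                       server_lookup, voice_similarity, send_sms):
    record = server_lookup(card_code)                # factor 1: the card code
    if record is None:
        return False
    if voice_similarity(live_voice, record["voiceprint"]) < 0.9:
        return False                                 # factor 2: the voiceprint
    expected = send_sms(record["phone"])             # factor 3: SMS code
    return entered_sms == expected

# Demo with stub components standing in for ZierServer and the voice matcher.
db = {"1234": {"voiceprint": "alice-vp", "phone": "555-0100"}}
ok = verify_transaction(
    "1234", "alice-vp", "808080",
    server_lookup=db.get,
    voice_similarity=lambda a, b: 1.0 if a == b else 0.0,
    send_sms=lambda phone: "808080")
print(ok)   # True only when all three factors check out
```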
2009 - CS301
AN INNOVATIVE OPEN-SOURCE REPORTING SOLUTION TO COMBAT AIDS/HIV
Tej Diptesh Amin, Zueber Juma,
Lake Highland Preparatory School, Orlando, FL
An open-source data processing program can help fix the weak infrastructure, including data collection and reporting systems, that hampers the scaling of
AIDS/HIV treatment in third-world countries assisted by the PEPFAR program and the World Health Organization. By removing the
need to send data back to the US, where it is currently processed, and by eliminating the use of the costly SAS program, which currently generates the
PEPFAR reports, medical facilities can become independent and deliver treatment with fewer resources and less outside help.<br><br>Using Pentaho Data
Integration, test data from OpenMRS was transformed and put into temporary tables in MySQL. This was achieved by using JavaScript commands to select the
data, at which point SQL statements were used to filter the data into tables. This was repeated for separate processes in compliance with the PEPFAR report
structure. Thus many working tables were created in MySQL with the necessary medical information. This allowed specific queries to be called within
another program, Pentaho Reporting.<br><br>In Pentaho Reporting, a template was generated similar to the PEPFAR report template, with data being read in
from the tables along with an accompanying description. A partial report was created with the processes available from the initial phase of the
project, which proved that a PEPFAR report can be generated successfully using completely open-source software.
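A minimal sketch of the extract-and-filter step, with sqlite3 standing in for MySQL and a hypothetical one-indicator schema; the actual project used Pentaho Data Integration with JavaScript and SQL steps:

```python
# Select patient observations and filter them into a working table that a
# report template can query.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE obs (patient_id INT, concept TEXT, value TEXT);
    INSERT INTO obs VALUES (1,'ON_ARV','yes'), (2,'ON_ARV','no'),
                           (3,'ON_ARV','yes');
    -- working table for a single hypothetical report indicator
    CREATE TABLE report_on_arv AS
        SELECT patient_id FROM obs WHERE concept='ON_ARV' AND value='yes';
""")
print(con.execute("SELECT COUNT(*) FROM report_on_arv").fetchone()[0])  # 2
```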
Awards won at the 2009 ISEF
Third Award of $1,000 - Team Projects - Presented by Science News
2009 - CS039
DETECTING CANCER: THE FRACTAL METHOD
Leah Avi Balay-Wilson
Lincoln Park High School, Chicago, IL
Purpose: To develop a fractal-based analysis tool for quantifying the breast densities associated with breast abnormalities (benign tumors and cancerous
tumors). Current methods are based on subjective review by a trained observer, a process that is prone to error. A computer-based method could reduce that
error margin and make testing for breast abnormalities much cheaper and more efficient.<br><br>Procedure: Developed and implemented a
computer program in MATLAB to determine the fractal complexity of breast tissues from mammogram images. Mammograms were analyzed based on the
region nearest the nipple (typically the densest region of the breast). The change in fractal dimension with respect to position in the image (the directional
fractal curve) was calculated and plotted. The amplitude of the directional fractal curve was used as the measure of fractal complexity. In a blinded study, the
program's results were matched against the results of professional radiologists. <br><br>Conclusions: Automated fractal complexity correlated
strongly with mammographic density determined by radiologists. This allows previously qualitative data to be quantified, as well as automating the testing
process, decreasing cost and increasing efficiency. The difference in density between normal breasts and those with tumors was quantified.
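A minimal box-counting sketch for estimating the fractal dimension of a binary image; the abstract names only "fractal complexity", so treating it as standard box counting is an assumption:

```python
# Box counting: count occupied boxes at several scales, then fit the slope of
# log(count) against log(1/size) to estimate the fractal dimension.
import numpy as np

def box_count_dimension(img: np.ndarray) -> float:
    """Estimate fractal dimension of a binary image by box counting."""
    sizes, counts = [2, 4, 8, 16, 32], []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s)
        counts.append((blocks.max(axis=(1, 3)) > 0).sum())
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(coeffs[0])

rng = np.random.default_rng(0)
tissue = (rng.random((256, 256)) > 0.5).astype(np.uint8)   # toy "dense" tissue
print(round(box_count_dimension(tissue), 2))               # ~2.0 for noise
```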
Awards won at the 2009 ISEF
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the United States Navy and Marine Corps
2009 - CS033
FPFD: AN IMPLEMENTATION OF IEEE 754-2008 DECIMAL ARITHMETIC
Tavian Edwin Lyon Barnes
Queen Elizabeth High School, Calgary, Alberta, CANADA
The IEEE 754 standard for floating-point arithmetic was recently revised, and among the changes is the introduction of decimal floating-point types. Arithmetic
on these types is carried out in base ten, which provides benefits for financial applications that may be legally obligated to round correctly in decimal. The 754-2008 standard specifies two encoding schemes for decimal floating-point numbers: densely-packed-decimal (DPD) and binary-integer-decimal (BID). FPFD is a
project to determine which encoding scheme allows for a faster software implementation of arithmetic operations. FPFD is written primarily in x86-64 assembly
language, and implements both the DPD and BID encodings. Analysis of performance was conducted with a novel assembly-based approach achieving timing
resolution of a single CPU cycle. Results from benchmarking FPFD indicate that the BID encoding is faster in software for all basic arithmetic operations.
Further benchmarks dispute important performance claims made by IBM regarding their decNumber implementation of the DPD encoding, as compared to
Intel's IDFPL implementation of the BID encoding.
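A minimal sketch contrasting the significand representations the project benchmarked. The real IEEE 754-2008 field layout (combination field, exponent continuation, the full DPD declet table) is omitted, and the DPD routine handles only the simple case where every digit is 0-7:

```python
# BID keeps the whole coefficient as one binary integer; DPD packs three
# decimal digits into each 10-bit "declet".
def bid_significand(digits):
    """BID: the coefficient as a plain binary integer."""
    return int(digits)

def dpd_significand(digits):
    """DPD: one 10-bit declet per 3 decimal digits (simple-case packing only,
    valid when every digit is 0-7; the full table has special cases for 8/9)."""
    declets = []
    for i in range(0, len(digits), 3):
        d1, d2, d3 = (int(c) for c in digits[i:i + 3])
        declets.append((d1 << 7) | (d2 << 4) | d3)   # bit 3 stays 0
    return declets

print(bin(bid_significand("1234567")))
print([bin(d) for d in dpd_significand("123456")])
```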
Awards won at the 2009 ISEF
Third Award of $350 - IEEE Computer Society
2009 - CS305
MODELING OF ENERGY TRANSFER DYNAMICS IN LHCS
Roman O. Baulin, Alexey V. Eremin,
Liceum of Information Technologies #1533, Moscow, RUSSIA
The aim of this work is the development of software for modeling the structure of natural and artificial light-harvesting complexes (LHCs) and simulating
energy transfer in them.<br><br>LHCs are an essential part of the photosynthetic apparatus, which converts the energy of light into chemical energy. They are
supramolecular ensembles of light-absorbing molecules (chromophores), kept together either by covalent bonds or by non-covalent interactions with a
protein matrix (environment). Most of the chromophores are responsible for absorbing light and transferring excitation energy to a special part of the
complex, the reaction center, where the energy is used to activate electron transfer.<br><br>Natural LHCs are very efficient devices. The efficiency of
artificial LHCs strongly depends on their structure, that is, the arrangement and mutual orientation of chromophores. Optimizing the structure with respect to
efficiency and robustness is an important step towards the development of artificial photosynthetic devices.<br><br>Energy transfer in LHCs is accompanied by
energy relaxation into the matrix surrounding the chromophores. This process must be taken into account by using the quantum theory of dissipation, Redfield
theory.<br><br>The software developed enables one to:<br><br>- design light-harvesting complexes with any spatial arrangement and orientation of
chromophores using the graphic editor;<br><br>- use various chromophores and environments in LHC design;<br><br>- calculate the probabilities of energy
transfer between the chromophores and estimate the efficiency of any given LHC;<br><br>- plot the time-dependent populations of various parts of the LHC.<br>
<br>All this facilitates the search for optimal architectures of artificial LHCs.
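A minimal sketch of the population-plotting capability in a simple rate-equation picture; the three-site chain and its rates are illustrative assumptions, and the real software uses Redfield theory rather than this classical master equation:

```python
# Time-dependent populations of an LHC's parts from dP/dt = K P,
# integrated with explicit Euler steps.
import numpy as np

# sites: antenna -> bridge -> reaction center, with forward transfer rates
K = np.array([[-1.0,  0.0, 0.0],    # antenna loses population at rate 1.0
              [ 1.0, -2.0, 0.0],    # bridge gains from antenna, loses at 2.0
              [ 0.0,  2.0, 0.0]])   # reaction center only accumulates

P = np.array([1.0, 0.0, 0.0])       # excitation starts on the antenna
dt = 0.01
for step in range(501):
    if step % 100 == 0:
        print(f"t={step * dt:4.1f}  populations={np.round(P, 3)}")
    P = P + dt * (K @ P)            # total population is conserved
```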
2009 - CS047
DESIGNING AN AFFORDABLE AND ACCESSIBLE VIRTUAL REALITY SYSTEM
Scott Douglas Betz
Bellbrook High School, Bellbrook, OH
Fully functional virtual reality systems are not realistic for practical applications such as video gaming and architectural modeling due to their astronomical
prices. A cheaper and more accessible system was designed using all consumer electronics, for less than $250, without sacrificing functionality. The
mathematical algorithm and software architecture were independently derived. Virtual reality systems could open up more applications and markets if made
cheaper and more accessible. The system uses three Nintendo Wii remotes to precisely track the movements of infrared emitters placed on iPod video goggles.
As a result, the user can freely move within a small room and fully interact with the virtual space. Future explorations include virtual reality systems using only
internal sensors.
Awards won at the 2009 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
2009 - CS046
AUTOMATING MINESWEEPER
Laurence Paul Bordowitz
Macomb Senior High School, Macomb, IL
Minesweeper is an NP-complete problem: it can be solved non-deterministically in polynomial time, but no deterministic polynomial-time algorithm is known.
This project attempts to solve Minesweeper using a P (deterministic polynomial-time) algorithm as an approach to the P=NP question. The program created in
this project relies heavily on a "set-subtraction" algorithm and a binary tree algorithm. The average number of algorithm iterations per run was recorded. Some
outliers occurred; the number of iterations varied even though an identical board size was used. Different board sizes were compared with each other, and a
general trend of growth was observed. The software created in this project does not meet the requirements of a P algorithm, as the binary tree algorithm grows
exponentially and its use of random clicks and choices makes it non-deterministic. However, it can be improved. The "set-subtraction" process may be key in
determining whether or not P=NP. A clarification of how Minesweeper is NP-complete is necessary, and future research should include other NP-complete
problems, such as 1-in-3 SAT or Sudoku, and their solutions.
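A minimal sketch of the "set-subtraction" idea as it is usually understood for Minesweeper inference; the constraint format and board fragment are illustrative assumptions:

```python
# If one numbered cell's unknown neighbors are a subset of another's, the
# difference set must contain exactly the difference in remaining mines.
def set_subtract(constraints):
    """constraints: list of (frozenset_of_cells, mine_count). Returns
    (safe_cells, mine_cells) deducible by pairwise subtraction."""
    safe, mines = set(), set()
    for cells_a, n_a in constraints:
        for cells_b, n_b in constraints:
            if cells_a < cells_b:                  # proper subset
                diff = cells_b - cells_a
                k = n_b - n_a
                if k == 0:
                    safe |= diff                   # no mines left for diff
                elif k == len(diff):
                    mines |= diff                  # every diff cell is a mine
    return safe, mines

# a "1" touching {A,B} and a "2" touching {A,B,C}: C must be a mine
cons = [(frozenset({"A", "B"}), 1), (frozenset({"A", "B", "C"}), 2)]
print(set_subtract(cons))   # (set(), {'C'})
```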
2009 - CS006
POST-PROCESSED ADAPTIVE OPTICS
James Daniel Brandenburg
Cocoa High School, Cocoa, FL
The light gathered by Earth-based telescopes is distorted by Earth's atmosphere. High-end research telescopes use deformable mirrors and powerful
computers to correct wave-front distortions and produce an enhanced image. Since these systems are extremely expensive, it would be useful to develop a
method of adaptive optics that can be used with conventional telescopes. The researcher hypothesized that a software and optical system could be produced to
achieve adaptive optics with a conventional telescope by collecting wave-front and image data separately, post-processing the wave-front, and then
reconstructing the image.<br><br>The three main aspects of this project were producing an optics simulation, designing optical components, and developing
software that analyzes wave-front aberrations and reconstructs images. The optics simulation was written in C++ using a ray-trace model as its foundation.
After writing the mathematics library, classes were written to model optical devices including parabolic mirrors, plane mirrors, concave and convex lenses,
beam splitters, and CCD sensors. <br><br>The researcher then studied the effects of wave-front aberrations on an image and designed an optical system that
uses micro-mirrors to extract and preserve part of the image data in a distorted wave front. The researcher then wrote software that uses the wave-front sensor
and image sensor data to measure the wave-front, determine the correct location of data within an image and reconstruct the final image.<br><br>After
analyzing images produced by the post-processed adaptive optics system, the researcher concluded that it is able to correct wave-front distortions and create
an enhanced image.
Awards won at the 2009 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
Second Award of $1,500 - Air Force Research Laboratory on behalf of the United States Air Force
Tuition Scholarship Award of $8,000 for original research in an important Naval-relevant scientific area - Office of Naval Research on behalf of the United
States Navy and Marine Corps
2009 - CS308
THE DECOMPOSITION/CONSTRUCTION OF COMPLEX DATA STRINGS OR SYSTEMS
Daniel Richard Brown, Jacob Beach,
Upper Sandusky High School, Upper Sandusky, OH
Most objects in life, whether atoms, thoughts, or a pile of mud, are but a composition of much smaller and less complex objects. If given the components of the
whole, could not the original be recreated? Or perhaps even an object of greater functionality? This project looks at the chance of replicating larger
structures (such as long word strings) when the parts that comprise them (the alphabet) are known. More than 200 word strings (independent variables) were entered
into a database, a computer program was made that created random word strings, the randomly created word strings were matched against words in the
database, and then any matches (dependent variables) were recorded. <br><br>Our hypothesis was that many of the word strings would be replicated. The
experimental results did not support our hypothesis. Instead, they revealed a high rate of matches with smaller strings, but little to no matches with
longer strings. The experiment also revealed potential for other uses if slightly tweaked or implemented in a different coding language. Such programs are
explored through implementation of information derived from the original program and touch on areas related to spoken language, internet passwords,
shape replication, and even cake.
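A minimal sketch of the experiment's structure -- random letter strings checked against a word list, with match counts grouped by length; the tiny word list stands in for the 200+-entry database:

```python
# Generate random strings of length 1-5 and count dictionary matches by
# length: short strings match often, longer ones almost never.
import random, string

words = {"a", "i", "an", "at", "to", "cat", "dog", "mud", "cake", "atoms"}
random.seed(0)
matches = {n: 0 for n in range(1, 6)}
TRIALS = 20000

for _ in range(TRIALS):
    n = random.randint(1, 5)
    s = "".join(random.choice(string.ascii_lowercase) for _ in range(n))
    if s in words:
        matches[n] += 1

print(matches)   # match counts fall off sharply as string length grows
```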
Awards won at the 2009 ISEF
Fourth Award of $500 - Team Projects - Presented by Science News
2009 - CS310
MODELING THE SUN-EARTH-MOON SYSTEM
Powell R Brown, Ted Benakis, John McCauley
Silver High School, Silver City, NM
At this time, most ephemeris generators available to the public require the user to have a strong understanding of astronomical coordinate systems and the
celestial sphere. In general, current ephemeris databases calculate the positions of celestial bodies in a spherical coordinate system. Most people, however,
are familiar with the Cartesian coordinate system and find spherical coordinates hard to understand. As such, the current ephemeris databases are difficult to
navigate, understand, and apply.<br><br>The goal of this project is to create a user-friendly ephemeris generator for the earth and moon with both numerical
and visual output. Another goal is to create a stable platform for further earth-moon-sun interaction investigations. To avoid the instabilities of N-body
simulations and the error accumulation associated with Euler's method, all positional calculations are based on Kepler's Laws. The program is written in DarkBasic,
and has an interface layout similar to common software. At the time of this writing, no other ephemeris generators are known to both have visual output and provide a stable
environment for further research.
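A minimal sketch of the kind of Kepler's-Laws step such a generator needs -- solving Kepler's equation for the eccentric anomaly and converting to a position in the orbital plane; the orbital elements are illustrative (roughly lunar values):

```python
# Solve M = E - e*sin(E) by Newton's method, then compute the in-plane
# Cartesian position from the eccentric anomaly.
import math

def kepler_E(M, e, tol=1e-12):
    """Eccentric anomaly E from mean anomaly M (radians)."""
    E = M
    while True:
        dE = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            return E

a, e = 384400.0, 0.0549          # semi-major axis (km), eccentricity
M = math.radians(75.0)           # mean anomaly at some epoch
E = kepler_E(M, e)
x = a * (math.cos(E) - e)        # position in the orbital plane (km)
y = a * math.sqrt(1 - e * e) * math.sin(E)
print(round(x), round(y))
```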
2009 - CS048
SCHOOL TIMETABLE GENERATOR SOFTWARE
Maximiliano David Bustos
Escuela Privada Gabriela Mistral Jornada Simple, La Rioja, ARGENTINA
The objective of this project was to develop a software program that generates school timetables. Creating school timetables manually is really hard work
because of the great number of variables and requirements.<br><br>The investigation began with interviews with authorities from Gabriela Mistral School who
were in charge of the school timetables every year. The information obtained was used to formalize the problem and create software test cases.<br><br>After
that, an administration system was created to organize all the information concerning this problem.<br><br>The following step was the design of an algorithm
to create timetables that fulfill all the requirements. This was a long process that included testing and correcting the algorithm in order to make it more efficient
and, at the same time, to improve the quality of the results.<br><br>Finally, the software was tested on some real cases. The timetables obtained were better
than the ones made by the school authorities. The algorithm had a good running time and it generated timetables faster than the manual method did.
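A minimal backtracking sketch of the core constraint problem (no class or teacher double-booked). The abstract does not specify the algorithm used, so backtracking here is an assumption, and the data is illustrative:

```python
# Assign each (class, subject) lesson a slot; backtrack on conflicts.
lessons = [("7A", "math"), ("7A", "art"), ("7B", "math"), ("7B", "art")]
teacher = {"math": "Gomez", "art": "Ruiz"}
SLOTS = [1, 2]

def ok(assign, lesson, slot):
    for other, s in assign.items():
        if s != slot:
            continue
        if other[0] == lesson[0]:                     # class already busy
            return False
        if teacher[other[1]] == teacher[lesson[1]]:   # teacher already busy
            return False
    return True

def solve(assign=None):
    assign = assign or {}
    if len(assign) == len(lessons):
        return assign
    lesson = lessons[len(assign)]
    for slot in SLOTS:
        if ok(assign, lesson, slot):
            result = solve({**assign, lesson: slot})
            if result:
                return result
    return None                                       # triggers backtracking

print(solve())   # e.g. 7A-math and 7B-art share slot 1
```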
2009 - CS049
DATA MINING AND VISUALIZATION: PREDICTING THE NEXT EPIDEMIC
Rowan Matthew Chakoumakos
Oak Ridge High School, Oak Ridge, TN
Emerging infectious disease outbreaks are increasing, along with the fear of pandemics. These diseases burden the global economy, but more importantly
pose a threat to the population. This research developed a system to predict in real time the likelihood of epidemics from disease outbreak reports. This
dynamic system, powered by artificial intelligence, extracts information and gathers additional statistics to predict the likelihood of an incoming outbreak report
becoming an epidemic. The epidemic predictions are then visualized on a three-dimensional globe facilitating rapid pattern recognition. The results are
promising with accuracy as high as 80% for correctly identifying whether or not an outbreak will develop into an epidemic. The results are high enough that this
system could be used as an early warning tool. By being able to identify an epidemic before the outbreak reaches epidemic magnitude, humans can institute
preventive measures inhibiting the outbreak’s development. This system has the potential to predict the next epidemic.
Awards won at the 2009 ISEF
First Award of $3,000 - Air Force Research Laboratory on behalf of the United States Air Force
2009 - CS001
SHAPE RECOGNITION WITH NEURAL NETWORKS
Justin De Cheng
Merlo Station High School, Beaverton, OR
Shape recognition is an important research area in computer vision. There are five steps in shape recognition: image acquisition, image processing, image
segmentation, shape analysis, and shape classification. The software inputs the captured image and converts it to a binary image. The compactness,
eccentricity, and moment invariants are extracted from the shapes. In order to classify the shapes, a neural network is used. The neural network is first
trained with a training image and a class file. After the training, the shape features are inputted into the neural network, and the neural network classifies them
based on the shapes used in the training process.<br><br>In order to test the software, images with three different shape categories were used. One set
contained insects, and the other two sets contained chess pieces. For the training process, the shapes are arranged in an organized manner. After the training
process is complete, the test image scrambles the shapes. If the software is able to correctly classify the shapes in the test image according
to what it learned from the training image, the software is able to recognize shapes. <br><br>There were three sets of images. After the software
had gone through these three sets, it was able to recognize a majority of the shapes. There were a few mistakes during the shape classification process,
where a shape, according to the software, was about equally likely to fit in either class. Otherwise, the software worked.
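A minimal sketch of the feature-extraction step for two of the named features, compactness and eccentricity, on a binary image; the perimeter estimate is deliberately crude and moment invariants are omitted:

```python
# Compactness from area and perimeter; eccentricity from the covariance of
# foreground pixel coordinates.
import numpy as np

def features(img):
    ys, xs = np.nonzero(img)
    area = len(xs)
    # crude perimeter: foreground pixels with at least one background neighbor
    padded = np.pad(img, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((img & ~interior.astype(bool)).sum())
    compactness = perimeter ** 2 / (4 * np.pi * area)
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals = np.linalg.eigvalsh(cov)          # ascending eigenvalues
    eccentricity = float(np.sqrt(1 - evals[0] / evals[1]))
    return compactness, eccentricity

img = np.zeros((32, 32), dtype=np.uint8)
img[8:24, 12:20] = 1                         # a 16x8 rectangle
print(features(img))                         # elongated: eccentricity ~0.87
```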
2009 - CS021
DO YOU EAR WHA’ I EAR? II: LOWERING VOICE FREQUENCIES IN REAL TIME TO REVOLUTIONIZE HEARING ASSISTANCE TECHNOLOGY
Nicholas Mycroft Christensen
Wetumpka High School, Wetumpka, AL
After researching audiology, electrical engineering, assembly language programming, signal processing, and microprocessors, I designed, with the help of an
electrical engineer, a circuit board customized for a PIC18F8722 microprocessor for use with my frequency-shifting algorithm, which lowers voices in real
time. The customization includes analog-to-digital and digital-to-analog converters and operational amplifiers, as well as input and output. <br><br>I tested the
circuit board components using Microchip’s MPLAB Integrated Development Environment and a digital oscilloscope, and I debugged my program. My assembly
language program checks the RAM using walking 0’s and FFh’s; then the core algorithm omits wave cycles at intervals and expands them in real time. To work
in real time, the program runs at 10 MIPS. The clocks are synchronized to adjust the time-span of the wave as cycles are checked for DC bias, counted and
output correctly at a lower frequency. <br><br>Test subjects listened through headphones to variously pitched recorded voices speaking sets of similar-sounding words with multiple voiceless phonemes. The five tests per subject ranged in real-time adjustments from normal to 20% lower. Results of ongoing
tests show that hearing-impaired individuals gained 11% average improvement in word recognition, with individual improvements up to 60%, and even those
without impairment improved 4%. The results support my hypothesis that lowering frequencies in real time works, providing a breakthrough in hearing
assistance, and the process can be handled by an inexpensive microprocessor, which could be easily incorporated into existing communication devices, such
as cell phones, walkie-talkies, and radios.
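A minimal sketch of the cycle-omission idea on a pure sine wave: drop every fifth cycle and stretch the remaining four to fill the time, lowering the pitch 20% while preserving duration. Real speech requires pitch tracking, and the fixed-point assembly implementation differs throughout:

```python
# Omit one cycle in five, then expand the kept cycles to the original span.
import numpy as np

RATE, F0 = 8000, 200                  # sample rate and input pitch (Hz)
cycle = RATE // F0                    # samples per cycle (40)
t = np.arange(RATE) / RATE
x = np.sin(2 * np.pi * F0 * t)        # one second of input

out = []
for i in range(0, len(x) - cycle * 5 + 1, cycle * 5):
    block = x[i:i + cycle * 5]
    kept = block[:cycle * 4]          # drop the 5th cycle...
    n = len(block)                    # ...and stretch the other 4 to fill it
    stretched = np.interp(np.linspace(0, len(kept) - 1, n),
                          np.arange(len(kept)), kept)
    out.append(stretched)
y = np.concatenate(out)               # same duration, pitch lowered by 20%
print(len(x), len(y), "output pitch ~", F0 * 4 / 5, "Hz")
```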
Awards won at the 2009 ISEF
Second Award of $500, in addition the student's school will be awarded $200, and the student's mentor will be awarded $100 - Acoustical Society of America
Second Award of $1,500 - Computer Science - Presented by Intel
UTC Stock with an approximate value of $2,000 - United Technologies Corporation
2009 - CS304
HEURISTIC MOVE-SCANNING ALGORITHM OVER AN ARBITRARY NUMBER OF PROJECTION NODES
AKA CHECKERS AI
Aaron Daniel Davidson, Brian Fei,
Paul Laurence Dunbar High School, Lexington, KY
We wish to determine whether there exists a point after which looking further ahead in a checkers game no longer has any significant bearing on the outcome. In
order to accomplish this, we have written a checkers program with an AI that utilizes a heuristic algorithm that projects all potential moves in a given board state
to a preset recursive level and determines which move is apparently optimal. We played two AI players against each other, allowing one player to project one
move further than the other. The data showed that as the preset projection level increased, the disadvantaged AI player won with increasing frequency, though
it always lost more than it won. This suggests that our hypothesis is to some extent supported. That is, we can conclude that as the AI projection levels
increase, an AI player one level lower than another will win more often. This conclusion demonstrates the potential of heuristic algorithms in which a “good
enough” solution rather than the absolute optimal one is found. Because the gap between the two AI players decreased with increasing projection level, it can
be suggested that one can analyze a subset of solutions rather than every possible solution to reach a suitable result. This technique can increase the
efficiency and performance of future computer systems and algorithms.
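A minimal depth-limited minimax sketch with a heuristic evaluation, run on a toy game tree rather than a full checkers engine; the "projection level" corresponds to the depth cutoff:

```python
# Depth-limited minimax: recurse to a preset level, then fall back on the
# heuristic evaluation of the position.
def minimax(state, depth, maximizing, moves, evaluate):
    """Return the heuristic value of `state` looking `depth` plies ahead."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    values = (minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options)
    return max(values) if maximizing else min(values)

# toy game: a state is an integer; each move adds 1 or 2; eval favors evens
moves = lambda s: [s + 1, s + 2] if s < 8 else []
evaluate = lambda s: 1 if s % 2 == 0 else -1

for level in (1, 2, 3, 4):    # deeper projection changes the assessment
    print(level, minimax(0, level, True, moves, evaluate))
```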
2009 - CS003
COMPUTER-GENERATED TREES USING A LIGHT-AND SPACE-BASED ALGORITHM
John Wilson Dorminy
Sola Fide Homeschool, McDonough, GA
I created a new method of algorithmically generating realistic woody landscape plant architecture based on maximizing light capture in the plant canopy. I use
optimizations similar to Steiner tree approximations to produce visually plausible trees significantly faster than existing space or L-system based algorithms.
<br><br>Both this algorithm and Runions' algorithm iteratively grow branches toward attractor points randomly distributed in the crown envelope. L-system
based algorithms create self-similar trees via preset rules, requiring exponential time and creating unrealistic trees. Runions' and other previous methods
imitated hypothesized natural growth processes without consideration for runtime. My faster hypothesized growth process is less idealized yet more natural.
<br><br>I discovered these attractor-based algorithms generate Steiner tree approximations. As in other Steiner tree approximation algorithms, I use caching,
memoizing, and binning the active attractors to increase speed. My optimized algorithm creates plausible woody landscape plant structures much faster than
previous methods.<br><br>Tests showed my algorithm to be 30 times faster for a 30,000-attractor tree, while generating structures similar to those of Runions'
algorithm. My algorithm can generate a realistic detailed tree with over a million branch tips in 3 minutes on a 2.4GHz processor.<br><br>This new algorithm
for tree generation represents an important step forward in computer tree generation. My algorithm is both biologically plausible and significantly faster than
existing algorithms. For the first time, complexity analysis and optimization techniques are applied to algorithmic botany in this novel light- and space-based
generation algorithm.
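A minimal sketch of attractor-driven growth in the spirit described above; the parameters are illustrative, and the caching, memoizing, and binning optimizations the abstract credits for its speed are omitted:

```python
# Each step, every attractor pulls its nearest branch node; new nodes grow
# toward the average pull, and reached attractors are removed.
import random, math

random.seed(2)
attractors = [(random.uniform(-1, 1), random.uniform(1, 3)) for _ in range(200)]
nodes = [(0.0, 0.0)]                                   # trunk base
STEP, KILL = 0.15, 0.2

for _ in range(60):
    pull = {}                                          # node index -> unit vectors
    for ax, ay in attractors:
        i = min(range(len(nodes)),
                key=lambda j: (nodes[j][0] - ax) ** 2 + (nodes[j][1] - ay) ** 2)
        dx, dy = ax - nodes[i][0], ay - nodes[i][1]
        d = math.hypot(dx, dy)
        pull.setdefault(i, []).append((dx / d, dy / d))
    for i, vecs in pull.items():
        vx = sum(v[0] for v in vecs) / len(vecs)
        vy = sum(v[1] for v in vecs) / len(vecs)
        n = math.hypot(vx, vy)
        if n > 1e-9:
            nodes.append((nodes[i][0] + STEP * vx / n, nodes[i][1] + STEP * vy / n))
    attractors = [a for a in attractors                # drop reached attractors
                  if min((a[0] - x) ** 2 + (a[1] - y) ** 2 for x, y in nodes) > KILL ** 2]
    if not attractors:
        break
print(len(nodes), "branch nodes grown")
```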
2009 - CS008
SYSTEM S: DESCRIBING STATE IN FUNCTIONAL LANGUAGES
Kevin Michael Ellis
Catlin Gabel, Beaverton, OR
This project introduces System S, an extension to the lambda calculus for describing state in functional languages. The descriptions of the state-changes in the
formalism are represented by a type theory for System S. A type inference heuristic for this type theory and the limitations of the type system and
reconstruction algorithm are discussed. The relationship between evaluation strategy, semantics, and types is discussed in relation to state. A novel
evaluation strategy reliant upon this type theory is proposed. System S and related type theories are increasingly extended in granularity and expressiveness
throughout the project, progressing from the simple System S to the double-smeared System S, a variant specialized for mutable references. Practical
considerations, such as recursion, conditional branching, and local state are taken into account. The project concludes with a practical application of this
calculus: automatic parallelization. A working prototype of an automatically parallelizing interpreter for System S is presented. Novel optimizations for automatic
parallelization are presented, including the use of a stochastic algorithm to optimize parallelizations. We used the interpreter to parallelize several test
programs, achieving performance increases generally in the 20% to 30% range as a result of the parallelizations.
Awards won at the 2009 ISEF
All-expense-paid trip to tour CERN - European Organization for Nuclear Research-CERN
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Computer Science - Presented by Intel
2009 - CS024
THINKING LIKE A HUMAN: A STUDY OF THE HUMAN BRAIN AND ARTIFICIAL NEURAL NETWORKS
David Michael Findley
Westmoore High School, Oklahoma City, OK
Artificial neural networks (ANN) are a form of artificial intelligence modeled after how the brain processes information. In this biological paradigm there are
neurons (nodes) which are small processing units. Neurons are connected to other neurons and these connections can have various integrities (weights). As
signals travel through the network output is produced. It is the purpose of this experiment to see if the simplified artificial neural networks will function similarly
to the complex human brain. One hundred four (104) students were given a test form that tested how they were able to learn a difficult pattern using only
examples. A neural network was taught with the same examples, and it solved the same problems, with the intent that the humans and the neural network
might make mistakes on the same problems. After training, the ANN was not able to generalize the pattern and therefore could not correctly
answer test problems. The number of training problems was very small, and the ANN overfit such that it memorized training problems but could not make
predictions. This does not mean that ANN learning cannot be compared to human learning, but that in this case insufficient data was produced to make the
comparison.
2009 - CS019
WEB EMULATOR OF COMMAND PROCESSORS OF OPERATING SYSTEMS
Kirill Vyacheslavovich Gerasimov
Samara Medical-Technical Lyceum, Samara, Samara region, RUSSIA
Nowadays PCs are usually equipped with operating systems that have a GUI to let users easily work with their data. But operating systems also include command
processors capable of running commands written in a special formalized language. These are very effective at routine tasks such as batch renaming or
shell programming and can seriously speed up the work process. But it can be hard for a common user to start using command processors, because of the
potential danger to the system (mistakes in command syntax can have drastic consequences), the difficulty of the syntax itself, and the existence of a wide
variety of command processors (each with its own syntax and each being part of an OS, which often costs money and takes time to install). So, the purpose of
this work was to create an emulator of command processors of different operating systems to let a user run basic commands safely on his own computer.<br>
<br>It was decided to build the emulator as a system consisting of a web interface imitating a typical console and a syntax analyzer, which parses the user
input and runs commands on a server, in the user's personal folder. The project was realized with ASP (Active Server Pages) technology.<br><br>The latest
version of the product, WECP (Web Emulator of Command Processors), lets the user run basic shell commands of Windows and UNIX operating systems and
create and complete exercises on editing the file structure. It can
be used for educational purposes at school.
2009 - CS020
ELYSIAN SHADOWS LEVEL EDITOR: AN ITERATIVE APPROACH TO SOFTWARE DEVELOPMENT
Marcel Fabry Girgis
Bob Jones High School, Madison, AL
Video games allow players to explore vast worlds and interact with diverse environments. A game itself has been programmed to allow this interaction between
player and world, but how are these worlds created in the first place? A level editor is a piece of software created to give game developers a means of
designing levels for their games. A level editor offers a graphical user interface that allows a world to be created from basic 2D computer graphics.<br><br>This
project focuses on the development and engineering of a level editor to be used by a team of independent video game developers creating a game called
"Elysian Shadows." This team is composed of my brother and a few of our friends. We each have our own respective task, and it is my duty to create the level
editor. There is no existing equivalent of this software, as it is highly customized and has unique specifications that have been designed to meet not only the
requirements of the Elysian Shadows game, but also any application that can use environments constructed from 2D graphics.<br><br>To meet these
specifications, I have chosen to use the Windows GUI and a programming language called BlitzPlus. The demands for rapid development and the constant
additions/changes to my objectives have compelled me to take an iterative approach to software development. This approach is characterized by a cycle of
analysis, design, implementation, testing, releasing and assessing feedback; each release only includes an addition of a few new objectives to ensure stability
and user acceptance.
2009 - CS044
ANALYZING THE EFFECTS OF ASYMMETRIC LINKS ON WIRELESS ROUTING PROTOCOLS THROUGH COMPUTER SIMULATION
Daniel Thomas Grier
Southside High School, Greenville, SC
B.A.T.M.A.N. (Better Approach to Mobile Ad-hoc Networking) is a recent implementation of a wireless network protocol that does not rely on a centralized node
to transmit data. The algorithm functions by assuming the paths it configures are bidirectional. Therefore, in the presence of asymmetric links, which occur
when the percentage of packet loss is different for transmitting and receiving data, the algorithm optimizes transmission routes for receiving data rather than
transmitting data. This project analyzes the feasibility of using the B.A.T.M.A.N. protocol under these asymmetric conditions.<br><br> In order to test the
algorithm and its most recent mechanisms for compensating for asymmetric links, a computer simulation was created. The environment of the nodes in the
simulated network was created with inherently noisy and asymmetric conditions. Using the total percentage of packet loss across a given path as a metric, the
paths configured by the B.A.T.M.A.N. algorithm were compared to the best possible paths configured by the environment.<br><br> The paths configured by the
B.A.T.M.A.N. algorithm were very similar to the optimal paths calculated by the environment; in 20 trials with 40 nodes, only 5 configured routes had a
transmission quality that differed from the best possible route by more than 5% (those 5 routes differed by a maximum of only 11%), indicating that the
algorithm could form near-optimal paths under asymmetric conditions.
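A minimal sketch of the path metric described -- total delivery quality along a directed path where each link direction has its own loss rate; the link values are illustrative:

```python
# Each directed link has its own delivery probability, so a path's quality
# differs by direction: that is the asymmetry the simulation modeled.
probs = {("A", "B"): 0.9, ("B", "A"): 0.5,    # asymmetric link A<->B
         ("B", "C"): 0.8, ("C", "B"): 0.7,
         ("A", "C"): 0.4, ("C", "A"): 0.4}

def path_quality(path):
    """Probability a packet traverses the whole directed path."""
    q = 1.0
    for u, v in zip(path, path[1:]):
        q *= probs.get((u, v), 0.0)
    return q

print(path_quality(["A", "B", "C"]))   # 0.72: better than direct A->C (0.4)
print(path_quality(["C", "B", "A"]))   # 0.35: the reverse direction differs
```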
2009 - CS025
HEAD-CONTROLLED COMPUTER INTERFACE FOR THE DISABLED
John B Hinkel, III
Hopkinton High School, Hopkinton, MA
Countless individuals have disabilities that limit the use of their hands, including quadriplegics, stroke victims, amputees, arthritis patients, or people with mild
neurodegenerative disorders. These conditions often impede the ability of individuals to control a computer mouse. Current systems designed to allow
autonomous computer control fail to address both clicking and cursor movement, thereby offering limited ability to control a computer pointer. The purpose of
this project was to design a system composed of a unique program and a headset that enables disabled individuals to control a computer mouse through
simple head movements. <br><br> <br><br>A program was written that translates accelerometer movements from a headset into cursor movements. The
accelerometer data also activates clicking functions, such as left, right, and double clicks, as well as dragging, scrolling, and text selection. In order to
accommodate the needs of a wide range of disabilities, the program also allows the user to adjust the sensitivity of cursor movement. In order to operate the
program, a headset was designed and constructed that is portable, compact, light-weight, adjustable, and comfortable.<br><br> <br><br>The system was
evaluated by healthy and disabled individuals. More than 63% of the subjects tested indicated that the system accomplished all design criteria. Additionally, all
test subjects were able to complete a delineated series of tasks, indicating that the system’s functionality is comparable to that of a computer mouse.<br><br>
<br><br>This data suggests that this system will permit disabled individuals to more effectively control a computer pointer in the home or workplace.
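A minimal sketch of the tilt-to-cursor mapping with an adjustable sensitivity; the dead zone and scale factor are hypothetical, and the real program also translated gestures into clicks, dragging, and scrolling:

```python
# Map accelerometer tilt to per-frame cursor movement, ignoring small tremors.
def cursor_delta(ax, ay, sensitivity=1.0, dead_zone=0.05):
    """Map tilt readings (in g) to cursor movement in pixels per frame."""
    def axis(a):
        if abs(a) < dead_zone:          # ignore small tremors
            return 0.0
        return a * 40 * sensitivity     # scale tilt to pixels per frame
    return axis(ax), axis(ay)

print(cursor_delta(0.02, 0.30))                    # resting x, nodding down
print(cursor_delta(0.02, 0.30, sensitivity=0.5))   # user-adjusted sensitivity
```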
Awards won at the 2009 ISEF
First Award of $1,000 - American Intellectual Property Law Association
Third Award of $1,000 - Computer Science - Presented by Intel
Third Award of $150 - Patent and Trademark Office Society
2009 - CS028
NAVIBOT: PHASE II
Matthew Joseph Hummel
Florence High School #14-1, Florence, SD
The purpose of this year's project was to test the capabilities of the new applied sensors of the Navibot's self-navigation system. The hypothesis for this project
was that the Navibot would efficiently direct itself in a given direction through obstacles without meeting any objects in it's path.<br><br>The previous materials
used to create the Navibot were BASIC Stamp II module, a Board of Education docking system for the BASIC Stamp, two plexi-glass circles of 6 1/2 inches in
diameter, various #6 nuts and bolts, a flat bottle cap for weight support, a PING ultrasonic sensor for sonar detection, a 9 volt battery, and two Continuous
Rotation Servo Futaba motors with Boe bot wheels for mobility of the Navibot.<br><br>This year, a Hitachi Compass Module programmed through the BASIC
Stamp II, was added to the robot for the direction that was needed for the robot to travel in a direction without meeting any other objects. Results showed that
the Navibot's Compass Module and PING sensor complied with the student researcher's code to direct the Navibot to a certain direction while avoiding
obstacles. Out of ten test runs, all ten of these tests were successful.<br><br>Probable applications for the Navibot in the future could be used for autonomous
vehicles, voice-controlled mobile devices, and navigation systems for the disabled.
2009 - CS306
STYLOMETRIC "FINGERPRINTING": A COMPUTERIZED APPROACH TO AUTHOR IDENTIFICATION
Bethany Lynne Johnson, Ashley Kate Vechinski,
Life Christian Academy, Harvest, AL
Qualitative stylometry (writing style analysis) has been used in authorship studies since the fifteenth century. Can quantitative, computerized analysis of style
distinguish one student author from another? The experimenters hypothesized that computerized stylometric analysis of text statistics, syntactic features, part-of-speech features, closed-class word sets, and measures of vocabulary richness can differentiate between authors. <br><br> Eighteen metrics were selected
through background research, teacher interviews, and evaluation of student papers. De-identified student essays were obtained. Each student’s essays were
divided into two files, a training set and a testing set. The experimenters wrote two computer programs: TextAnalysis, which calculated values for each metric,
and AuthorIdentifier, which utilized techniques similar to those employed in military target identification. AuthorIdentifier determined all possible combinations of
the eighteen metrics. It then retrieved the previously-calculated values of each metric for each testing and training set. For each pairing of a metric combination
(262,143 possible) and a testing set, AuthorIdentifier determined which student’s training set was closest to the given testing set. <br><br> The most valuable
metrics were average word length, percent punctuation, standard deviation of sentence length, average number of selected punctuation marks per sentence,
average word frequency class, and average number of prepositions per sentence. AuthorIdentifier placed the correct author in the top 10.7% of rank-ordered
authors 82.7% of the time and in the top 17.9% of rank-ordered authors 89.7% of the time. Author identification through computerized stylometric analysis may
be extended to provide a superior alternative to current corpus-based plagiarism detection programs.
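A minimal sketch of the nearest-training-set idea using three of the eighteen metrics; the distance is unnormalized for brevity, and the texts are toy stand-ins for the de-identified essays:

```python
# Compute a few style metrics, then attribute a test text to the author
# whose training text is closest in metric space.
import string

def metrics(text):
    words = text.split()
    sents = [s for s in text.replace("!", ".").replace("?", ".").split(".")
             if s.strip()]
    avg_word = sum(len(w.strip(string.punctuation)) for w in words) / len(words)
    pct_punct = sum(c in string.punctuation for c in text) / len(text)
    avg_sent = len(words) / len(sents)
    return (avg_word, pct_punct, avg_sent)

def closest_author(training, test_text):
    t = metrics(test_text)
    return min(training, key=lambda a: sum((x - y) ** 2
                                           for x, y in zip(metrics(training[a]), t)))

training = {"student1": "Short words here. Brief text. Plain style.",
            "student2": "Considerably lengthier constructions characterize "
                        "this particular composition, unquestionably."}
print(closest_author(training, "Tiny words. Small ones. Easy text."))
```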
Awards won at the 2009 ISEF
Team First Award of $500 for each team member - IEEE Computer Society
Second Award of $1,500 - Team Projects - Presented by Science News
2009 - CS004
AN EXPLORATION OF THE EFFECTS OF VALUE FUNCTION ESTIMATION AND DATA MODELING ON THE PERFORMANCE OF A REINFORCEMENT
LEARNING ALGORITHM
Edward T. Kaszubski
Lake Highland Preparatory School, Orlando, FL
This project sought to determine the best method of value function estimation for a reinforcement learning algorithm as well as to see how replacing the default
value in a value function with an artificial neural network would affect the overall performance of the reinforcement learning algorithm. The number of iterations
performed by the algorithm until converging on a policy and the average net reward attained by the algorithm when using its selected policy were recorded for three
types of value function estimation techniques. The first value function estimation technique calculated values using a discounted infinite sum of all future
rewards when starting from a given state. The second technique calculated a sum of available rewards when starting from a given state, stopping when a “loop”
was detected. The third technique calculated the maximum value attainable at a given state using a recursive summation of the value available at all child
states of the given state. The maximum value technique was found to be the most efficient of the three variations tested, consistently converging on the optimal
policy for small values of the policy's delta variable, though the second algorithm showed more potential for higher values of the policy's delta variable. The
addition of an artificial neural network to the value function resulted in a significant increase in performance for all three algorithms. Both parts of the hypothesis
were accepted. Further research potential includes implementing more advanced data modeling techniques and combining several reinforcement learning
algorithms together to accomplish more complex tasks.
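A minimal sketch contrasting two of the three value-estimation techniques on a tiny deterministic state graph; the graph, rewards, and discount factor are illustrative:

```python
# Two value estimates for the same start state: a discounted sum along the
# greedy chain versus the recursive maximum attainable value.
rewards = {"A": 0, "B": 2, "C": 5, "D": 1}
children = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def discounted_value(s, gamma=0.9):
    """Discounted sum of rewards along the greedy chain of best children."""
    total, scale = 0.0, 1.0
    while children[s]:
        s = max(children[s], key=lambda c: rewards[c])
        total += scale * rewards[s]
        scale *= gamma
    return total

def max_value(s):
    """Maximum value attainable from s via recursive summation."""
    if not children[s]:
        return 0.0
    return max(rewards[c] + max_value(c) for c in children[s])

print(discounted_value("A"), max_value("A"))   # 5.9 vs 6.0
```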
2009 - CS042
APPLIED DIGITAL CEREBROVASCULAR SEGMENTATION
Vedant Subramaniam Kumar
duPont Manual Magnet High School, Louisville, KY
The cerebrovascular system facilitates blood circulation through the brain. Diagnosing atherosclerosis, vascular malformations, and other disorders which affect
this system is difficult. The restriction or loss of blood flow caused by cerebrovascular disorders can result in stroke, a leading cause of death and disability, or
life-threatening coma. The objective is to create a new diagnostic tool that will accurately visualize the cerebrovascular system, so doctors may detect and treat
malignant formations preemptively and evaluate intra-cranial conditions for operation.<br><br>Magnetic Resonance Angiography technology is non-invasive,
well-established, and provides 3-dimensional (3D) brain data. Blood vessels are classified by adaptive probability models of voxel intensities and then threaded
into contiguous regions. The probability models are assembled by a modified expectation-maximization algorithm, which relies on the Markov-Gibbs model and
on maximum likelihood estimation fitting. A visualization toolkit is used to display the purified cerebrovascular system in 3D. Three test cases were used to
measure visualization accuracy. On average, the novel algorithm results in a 0.61% error rate. For comparison, the average error rates of the control algorithms were: Wilson-Noble (4.64%), Hu-Hoffman (18.8%), Kass-Witkin (11.9%), and Xu-Prince (5.96%). A C++ implementation of the novel approach completes segmentation within
5 seconds on a dual-core computer with one gigabyte of RAM. Minute error rates and a fast execution speed suggest that this tool can fulfill its purpose.
Awards won at the 2009 ISEF
Fourth Award of $200 - Association for Computing Machinery
Second Award of $1,500 - Computer Science - Presented by Intel
2009 - CS012
SECURE TESTING OSS
Brendan Anthony Lee
Alma High School, Alma, AR
Computer based examinations are becoming popular in schools and other organizations across the country. Recently, a local school district began using
commercial software to generate computer-based exams. However, these exams have presented a new and rising problem: they have proven easier to
cheat on and manipulate than their paper counterparts.<br><br> <br><br>Early forms of the developed program were hand-built, placing and modifying code
reflecting school requirements. This "hand-built" approach substituted for pseudocode in order to document and define what functions were necessary, and to
determine how each task should be executed. Necessary modifications were manually made to the HTML files, and the changes were noted for implementation
into the software.<br><br> <br><br>The program was shown to be effective at preventing cheating and unauthorized test taking by eliminating multiple score
entries per test, per student. Early versions of the software eliminated only some cheaters; however, as the program and technologies matured, methods of
cheating were eliminated resulting in a final product that is cheat-proof while offering other security options to teachers. As program development proceeded,
and automation became the standard, results became predictable. If, and when, circumventions are discovered, fixes can be implemented and distributed via
the AutoUpdate software. Automation and user-friendly GUIs created a program that can modify and deploy a test in minutes, producing tests more secure than the
originals the commercial software produced. These results suggest that the software has met its design goals.
Awards won at the 2009 ISEF
Genius Scholarships - Sierra Nevada College
2009 - CS309
HEURISTICAL MULTITOUCH INTERFACE DATA TRANSFER
Hyun Ki Lee, Darren Lim, Xue Lai
Chengdu International School, Chengdu, Si Chuan, CHINA
In a quickly advancing world, technology has developed to allow better interaction between humans and machines. One vanguard of this technology is the
concept of multitouch. In our project, we attempted to construct an interactive multitouch computer which was economical, transportable, and energy-saving
using the concept of IMP (Illuminated Matrix Plane) and communication between the Visual Basic and Flash programming languages in a system. By
researching multitouch techniques, then amalgamating components from different multitouch set ups, we managed successfully to assemble a functional
multitouch computer, and then conducted a survey to assess the true average expected price of a multitouch computer. By comparing the cost of our
multitouch computer and the average expected price from our survey, we could check if our goal was accomplished. Next, by writing code that could connect
Visual Basic to Flash, commands that were entered into Visual Basic could be transformed into enhanced graphics with Flash. Our multitouch computer was
cheaper than the average expected price by more than a thousand dollars, had an acceptable weight so as to achieve increased portability, and consumed less
power than normal multitouch techniques. A software engine which hooked Visual Basic and Flash enabled commands in the former language to be applied in
the latter. The results showed that our goals were accomplished and thus our experiment was successful. Possible applications of our multitouch computer
include usage in restaurants, art and architecture, and computers for the visually impaired, as it offers greater precision, interaction, and automation than past
technology.
Awards won at the 2009 ISEF
First Award of $3,000 - Team Projects - Presented by Science News
2009 - CS017
PROTECTING IP ADDRESS IN P2P NETWORK THROUGH TUNNELING AMONG TRUSTING PEERS
Keun Hong Lee
Korea Science Academy, Busan, SOUTH KOREA
A P2P (peer-to-peer) network is a popular approach to sharing files easily by direct connections between clients. However, the IP address of a client may be
revealed while uploading or downloading files in current P2P networks and can be used maliciously, because every packet includes source and
destination information. When an IP address is revealed, the services running on the client can be inferred from the list of shared files. Thus, P2P network
users always face the risk of hacking. To make up for these weak points in the P2P network, this research suggests a new P2P network supporting
anonymity to protect the IP address. A routing method between neighbor clients is devised to guarantee anonymity, and methods of file partitioning and
packet multiplexing are also proposed to keep the network speed from dropping. In the proposed P2P network, the IP address of each client is not revealed
to untrusted clients outside its neighborhood, because each client is directly connected only to the trusted clients located in its
neighborhood. Also, a client cannot find where the shared files are located. Clients are connected indirectly when shared files are transferred from the source to
the destination. Moreover, the proposed P2P network provides partitioned downloading, which splits the requested file across as many clients as are available,
and multiplexed downloading, which multiplexes the packets of a partitioned file, to alleviate the network speed drop caused by supporting anonymity.
2009 - CS311
SORTING INTEGERS AND FLOATS: EXAMINING RUN-TIME COMPARISONS
Haoran Li, Theo Young, Jason Cathey
Mississippi School for Mathematics & Science, Columbus, MS
The primary purpose of this project was to determine which of seven algorithms is the quickest and most efficient at sorting a certain number of random
integers (whole numbers) or floats (decimal numbers). The seven sorting algorithms are each distinct, each with its own limitations
and parameters. Based on prior research, the algorithm QuickSort was expected to be the most efficient at sorting both integers and floats.<br>
<br>To determine the most efficient sort for each of the two categories, run-throughs were carried out in JavaScript, a browser-based programming language, on
five distinct browsers: Internet Explorer 7, Mozilla Firefox 3, Safari 3, Safari 4 Beta, and Google Chrome. Moreover, all run-throughs were tested on the Linux
operating system Fedora, the Microsoft operating system Vista, and the Apple operating system Leopard. Therefore, the secondary purpose was to determine
whether the browser affects sorting times; the tertiary purpose was to determine whether the operating system affects sorting times.<br><br>Across the browsers, ShellSort and
QuickSort tied for quickest, being more efficient for floats and integers, respectively. <br><br>The results also showed that Quick, Merge, and Shell never
took more than 16 milliseconds to complete the initial sorting. BubbleSort, however, took more than 1300 milliseconds to sort both integers and floats once the
numbers reached 800. As the results showed, BubbleSort was clearly the slowest sort overall, many times slower than the second slowest,
the SelectionSort.<br><br>The results showed that QuickSort was most efficient on integers and ShellSort was most efficient on floats, contradicting the
main hypothesis that QuickSort would be quickest for both. Additionally, the tests showed that Google Chrome was the overall fastest browser, and
that the speed of the web browser had no effect on the relative sorting times of the algorithms.
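A minimal sketch of the methodology with Python standing in for the browser-hosted JavaScript runs; absolute times differ by runtime, but the growth trend (quadratic BubbleSort versus the faster sorts) is the point:

```python
# Time several sorts on the same random array of 800 floats.
import random, time

def bubble_sort(a):
    a = a[:]
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

data = [random.random() for _ in range(800)]   # 800 floats, as in the study
for name, fn in [("bubble", bubble_sort), ("quick", quick_sort),
                 ("built-in", sorted)]:
    t0 = time.perf_counter()
    fn(data)
    print(f"{name:8s} {1000 * (time.perf_counter() - t0):7.2f} ms")
```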
2009 - CS035
SEMANTIC IMAGE RETRIEVAL: LEARNING GAUSSIAN MIXTURE MODELS OF SEMANTIC CONCEPTS USING EXPECTATION-MAXIMIZATION
David C Liu
Lynbrook High School, San Jose, CA
The proliferation of digital photos has made efficient automation of image search more challenging than ever. Current image retrieval methods directly compare
images by low-level perceptual characteristics such as color and texture. However, humans understand images by describing them with concepts, such as
“building,” “sky,” and “people.” This project employs artificial intelligence to identify semantic concepts in images using probabilistic models. No manual
annotation of photos is required—images are found using content recognition.<br><br> In this research, the visual feature distribution of each concept was
modeled with a Gaussian Mixture Model learned using the Expectation-Maximization algorithm. The models captured the dominant colors and textures of the
concepts. During retrieval, the models were used to determine the probability of each concept occurring in the images, producing a probability vector called a
semantic multinomial (SMN). Images were retrieved by finding the most similar SMNs.<br><br> This research has shown that by linking images with their
underlying semantic meaning, the system understood the images and significantly improved retrieval accuracy. It was also shown that a feature set consisting
of YCbCr color values and Gabor texture filter responses had consistently better retrieval performance than Discrete Cosine Transform features.<br><br> A
novel dynamic browser was also developed for exploring image collections based on semantic similarity. It uses an animated spring graph layout algorithm to
create self-organizing clusters of semantic concepts. This provides a unique way to visualize the inherent semantic relationships in the image database, as well
as to search for images by concepts.<br><br> This technology has far-reaching applications beyond organizing family albums. In the medical profession, the
ability to quickly correlate unknown MRI images with those of known medical disorders is now within reach. In the area of natural resource exploration for oil, gas,
and coal, remotely-sensed images can now be automatically related to images of known natural reserves.
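A minimal one-dimensional sketch of the learning step -- fitting a two-component Gaussian Mixture Model by Expectation-Maximization; the real system fit GMMs to multidimensional color and texture features for each semantic concept:

```python
# EM for a 1-D, two-component GMM: alternate responsibilities (E-step) and
# parameter re-estimation (M-step).
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 1, 300)])

# initial guesses for weights, means, and variances
w, mu, var = np.array([0.5, 0.5]), np.array([1.0, 4.0]), np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibility of each component for each point
    pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n

print(np.round(mu, 2), np.round(var, 2), np.round(w, 2))  # ~[0, 5], ~[1, 1]
```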
Awards won at the 2009 ISEF
First Award of $1,000 - IEEE Computer Society
Fourth Award of $500 - Computer Science - Presented by Intel
Winner receives an all expense paid trip to Operation Cherry Blossom in Tokyo, Japan. Each trip winner will also receive three $1,000 U.S. Savings Bonds,
$500 from the Association of the United States Army, a gold medallion and a certificate of achievement. - United States Army
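A hedged sketch of the retrieval pipeline described above, using scikit-learn's EM-based GaussianMixture in place of the author's implementation; the concept names are taken from the abstract, while the feature vectors are random stand-ins for the YCbCr/Gabor features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
concepts = ["building", "sky", "people"]

# Fit one GMM per concept with EM on that concept's training features.
models = {}
for i, c in enumerate(concepts):
    X_train = rng.normal(loc=i, size=(200, 8))  # stand-in feature vectors
    models[c] = GaussianMixture(n_components=4, random_state=0).fit(X_train)

def semantic_multinomial(features):
    """Probability vector over concepts (the abstract's SMN) for one image,
    from per-concept mean log-likelihoods, normalized with a softmax."""
    loglik = np.array([models[c].score_samples(features).mean() for c in concepts])
    p = np.exp(loglik - loglik.max())
    return p / p.sum()

smn = semantic_multinomial(rng.normal(loc=0, size=(50, 8)))
print(dict(zip(concepts, smn.round(3))))
```

Retrieval then reduces to nearest-neighbor search over the images' SMN vectors.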
2009 - CS029
CREATING A COMPUTER SYSTEM FOR SIMULATION-BASED EDUCATION
Bridger James Maxwell
Utah County Academy of Sciences, Orem, UT
Simulations can be a powerful tool for education, as shown by the Christa McAuliffe Space Education Center (CMSEC) in Pleasant Grove, Utah. Simulations
employ a unique integration of role play, active learning, and narrative to create an environment where students can exercise problem solving, apply principles
learned in the classroom, and build teamwork while having a great time. However, building a simulator is beyond the budget and resources of most schools.
One of the most expensive, and complicated, aspects of a simulator is the computer system which the students interact with. Software that can run these
simulations is not commercially available, and creating such software can be enormously expensive.<br><br>My goal was to create a computer system that
could be used in schools wishing to create their own simulators. I both learned from, and improved, the software currently used at the CMSEC. My project had
many engineering goals in the areas of networking, extensibility, maintainability, themeability, and educational content. The finished product is a family of
applications, including a server for keeping all of the stations in a simulator synchronized, a client which bootstraps and themes a station, several content
screens, and hardware controllers for networking tactile controls and visual responses. Additionally, I created several tools to facilitate the development of
additional content which can be later added to the simulator. The new system will not only improve the CMSEC's current simulators, but can also be easily
deployed to a Macintosh lab at other facilities without a technical professional.
Awards won at the 2009 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
2009 - CS016
IMPROVING STATISTICAL MACHINE TRANSLATION THROUGH TEMPLATE-BASED PHRASE-TABLE EXTENSIONS
Hayden Craig Metsky
Millburn High School, Millburn, NJ
Machine translation has been studied for decades in an effort to facilitate communication among the world populations that use more than 120 different
languages. One approach has been to construct phrase-pair tables by analysis of previously translated documents. This enables a translation approach based
on phrases rather than on single words. But if a given phrase is not in such a table, it must be split into subphrases until matches are found, and this often
leads to poor or awkward word order in the resulting translation. In this work, I have developed a novel round-trip substitution algorithm that makes use of an
existing phrase-pair table to extend that table and generate new phrase-pairs that pertain to a given unknown phrase. The generated target phrases are then
checked against the target language model and the proper statistical weights for the new phrase-pairs are calculated from the ones used in the generation. Those statistical weights can then be utilized in a relatively standard decoder module. This approach draws on the knowledge already encapsulated in the
existing phrase-pairs, as well as the target language model, and hence tends to maintain changes in word order and word usage between the two languages. I
have evaluated the results of such translation using both an automatic evaluation technique and a human evaluation of the translations, and both methods
indicate that the proposed algorithm results in improved machine translation quality.
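To make the word-order problem concrete, a toy sketch of the baseline behavior the abstract improves on: phrase-table lookup that backs off to splitting an unknown phrase into known subphrases (the phrase table and the greedy longest-match strategy here are illustrative, not the author's round-trip substitution algorithm):

```python
# Toy phrase-based lookup with subphrase back-off.
phrase_table = {
    ("the", "red", "house"): "la casa roja",
    ("the",): "la",
    ("red",): "roja",
    ("house",): "casa",
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        # Try the longest known phrase starting at position i.
        for j in range(len(words), i, -1):
            if tuple(words[i:j]) in phrase_table:
                out.append(phrase_table[tuple(words[i:j])])
                i = j
                break
        else:
            out.append(words[i])  # unknown word passes through untranslated
            i += 1
    return " ".join(out)

print(translate(["the", "red", "house"]))  # whole phrase known: "la casa roja"
print(translate(["the", "red", "car"]))    # split into subphrases: the
                                           # adjective order is lost ("la roja car")
```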
2009 - CS045
COMPLEX EVALUATION OF DANGER AND TRANQUILITY IN URBAN SETTINGS: AN IMMUNOCOMPUTING INTELLIGENCE APPROACH
Lucia Mocz
Mililani High School, Mililani, HI
This project uses the idea of a formal protein to model danger and tranquility in urban settings. The approach is based on immunocomputing, i.e. it utilizes a
new kind of computational technology recently introduced by Tarakanov that reflects the principles of information processing executed by proteins and immune
networks. The immunocomputing paradigm allowed the author of this project to formulate and solve complex problems of hazard prediction in Mililani Town,
Hawaii, an All-American City, with the necessary mathematical rigor. The 25 hazard indicators considered were folded into 625 matrices representing the map
of the town. For supervised learning, a subset of matrices was factorized by singular value decomposition. The recognition of danger classes was performed by
finding the minimum value of binding energy (singular values) for each test (binding) matrix based on the singular vectors (formal protein-probes) of the learning
set. The areas with the greatest risks and the general hazard levels in every area were revealed with a resolution beyond the possibilities of traditional statistics. Hydro-geomorphological structures, eco-biospheric elements, socio-economic factors, and quasi-natural fire threats emerged as integral hazards in the
computed map. A similar analysis was carried out for tranquility which appeared not as a diametric opposite of danger but as a distinct notion. The results may
be used by Homeland Security experts and municipalities for emergency management and planning of development. The robust and flexible algorithm
implemented in this project could be equally successful for a unification of heterogeneous data in a wide range of scientific and technical applications.
Awards won at the 2009 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
Third Award of $1,000 - National Institute on Drug Abuse, Friends of NIDA, National Institutes of Health
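A minimal sketch of the SVD step described above, assuming each map cell is a small numeric matrix of hazard indicators; the sign convention for binding energy (w = -u^T M v, minimized over classes) follows the usual immunocomputing formulation, and the random data is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.random((25, 25))        # stand-in learning-set hazard matrix

# train = U @ diag(s) @ Vt; the dominant singular vectors act as the
# "formal protein-probes" of the learning set.
U, s, Vt = np.linalg.svd(train)
u1, v1 = U[:, 0], Vt[0, :]

def binding_energy(M):
    # Projection of a test (binding) matrix onto the learned probe pair;
    # smaller energy = stronger binding = closer match to the class.
    return -float(u1 @ M @ v1)

test = train + 0.05 * rng.random((25, 25))
print(binding_energy(train), binding_energy(test))
```

Classification then assigns each test matrix to the class whose probes give the minimum binding energy.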
2009 - CS013
AUTONOMOUS ROBOTIC VEHICLE LOCALIZATION UTILIZING A LASER TRIANGULATION ALGORITHM
Daniel Spencer Mullins
Sarasota High School, Sarasota, FL
The study of robotics is a recently developed field, and the study of autonomous (acting by independent thought) robotics is even more modern. Recently,
research has been done in the field of artificial intelligence in robots. One area of artificial intelligence currently being studied is autonomous ground,
underwater, and aerial robots. These robots could be used for various purposes, such as recovery missions in locations where humans cannot venture
because of danger or simple reconnaissance of an unknown environment. In order to complete their mission, these robots are equipped with many types of
sensors. These sensors allow the robot to learn about its environment and make decisions based on this information. Laser range finders, one type of
sensor, are typically incorporated into a more complex navigational system. In this research, a laser range finder was mounted on a robot. The laser range
finder sent data to the robot's computer with the distance and bearing of three pylons placed in the robot's environment. An algorithm was developed to use this
data to calculate the robot's position. This data was used by the computer to plot the intersection of three circles by utilizing the developed algorithm. The
intersection was the experimental location of the robot's center. Five trials were conducted and all experimental locations were within the area of the robot's
footprint. This proved my hypothesis correct: an algorithm could be created that uses the properties of circles to allow a mobile robot to determine its current
location.
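A minimal trilateration sketch consistent with the abstract: given the positions of three pylons and the laser ranges to each, the three circle equations are subtracted pairwise to give a linear system for the robot's center (coordinates here are illustrative):

```python
import numpy as np

pylons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
robot_true = np.array([1.0, 1.0])
ranges = np.linalg.norm(pylons - robot_true, axis=1)  # rangefinder readings

# Subtracting circle i from circle 1 cancels the quadratic terms,
# leaving two linear equations in (x, y).
(x1, y1), (x2, y2), (x3, y3) = pylons
r1, r2, r3 = ranges
A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
              [2 * (x3 - x1), 2 * (y3 - y1)]])
b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
              r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
print(np.linalg.solve(A, b))  # -> [1. 1.], the robot's center
```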
2009 - CS002
A QUANTUM IMPROVEMENT IN HUMAN-COMPUTER INTERFACES FOR INFORMATION COMMUNICATION USING MULTIMODAL DATA
REPRESENTATION
Neel Sanjay Patel
Oviedo High School, Oviedo, FL
Human-computer interfaces rely heavily on visual display of information. As the amount of information increases, it becomes critical to augment visual
communication with audio. Through a process called sonification, data can be communicated using the three dimensions of sound - pitch, intensity, and tempo.
<br><br><br>The advantages of sonification include the simplification of complex data as well as not requiring users to be directly in front of a screen. This
project was designed to identify the bounds within which sonification is a possible substitute for visualization. It was hypothesized that the key dimensions of
visual display could be mapped onto similar dimensions of audio without loss of information.<br><br><br>A sonificability score, based on how well/easily the
underlying information could be sonified, was calculated for 150 graphs collected from popular print media selected to cover various levels of education. A
program was then written to create audio representations of data for further review and analysis.<br><br><br>It was found that overall about 97% of graphs
studied are sonifiable, with relatively minor differences between the grade levels of the reading materials. Sonification significantly increases the number of possible data representations (103,320) compared to visualization (65,912). The exponential impact of using a combination of sonification and visualization can
yield up to 7 billion possible data representations, a quantum leap.<br><br><br>In conclusion, the hypothesis was supported by the data. Most commonly
communicated visual information can be represented in audio form. Future extensions include investigating the limitations of human comprehension of sonified
information as compared to visual.
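The abstract does not give its pitch/intensity/tempo mapping, so the sketch below is purely illustrative: a data series is sonified by mapping each value linearly to pitch and writing a mono WAV file with the Python standard library (intensity and tempo are held fixed for brevity):

```python
import math
import struct
import wave

data = [3, 7, 4, 9, 2, 8]        # the series to sonify
rate, dur = 44100, 0.25          # sample rate; seconds per data point

with wave.open("sonified.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)            # 16-bit samples
    w.setframerate(rate)
    for v in data:
        freq = 220 + 60 * v      # linear value-to-pitch mapping (Hz)
        for n in range(int(rate * dur)):
            sample = int(12000 * math.sin(2 * math.pi * freq * n / rate))
            w.writeframes(struct.pack("<h", sample))
```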
2009 - CS303
MODELLING THE PROCESS OF SHAFT BORING
Igor U. Pertsav, Uladzislau V. Kaptsiuk,
Secondary School #166, Minsk, BELARUS
An unavoidable part of the boring process is problematic geophysical conditions, the lack of visual control over the process, and emergency situations produced by deflection of the chisel from the planned profile. These lead to delays in creating a shaft and to increased labor and material costs. <br><br>The aim of the work is to help eliminate these difficulties while reducing costs.<br><br>The project investigated the dependence of the chisels used on the layer structure, and the causes of the profile's deviation during boring and methods for eliminating it, using computer modeling.<br><br>The solution is a mathematical model that represents the borehole as an interpolating Newton polynomial of degree n. Each position of the chisel's diving depth is characterized by the change of direction in the horizontal and vertical planes. The derivatives of the trajectory projections onto these planes, needed to find the angles, are calculated using numerical differentiation. These projections make it possible to build a 3D representation and visualize the boring process. The algorithms of the solution are implemented in Delphi 7.0.<br><br>Using these computational methods gave us the opportunity to eliminate emergency situations quickly and correctly and to visualize the boring process.
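A minimal sketch of the core numerics described above, assuming deflection measurements at a few depth stations: Newton divided-difference interpolation of the borehole profile, with a finite-difference derivative for the guidance angles (the station data is invented for illustration):

```python
def newton_coefficients(xs, ys):
    # Divided-difference table; coef[k] multiplies (x-x0)...(x-x_{k-1}).
    coef = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    # Horner-style evaluation of the Newton form of the polynomial.
    y = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        y = y * (x - xs[i]) + coef[i]
    return y

xs = [0.0, 10.0, 20.0, 30.0]   # depth stations
ys = [0.0, 0.4, 1.1, 2.5]      # horizontal deflection at each station
c = newton_coefficients(xs, ys)
h = 1e-4                       # central difference for the profile slope
slope = (newton_eval(xs, c, 15 + h) - newton_eval(xs, c, 15 - h)) / (2 * h)
print(newton_eval(xs, c, 15.0), slope)
```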
2009 - CS027
ENERGY CONSUMPTION OPTIMIZATION
Christina Figueiredo Prudencio
Escola Americana de Campinas, Campinas, Sao Paulo, BRASIL
This project aimed to design software, using LabVIEW and Microsoft Visual Studio, that recorded data and compiled graphics and tables relevant to facilitating data analysis and comparison for optimizing energy consumption. Measurements were taken using a precision power meter capable of measuring from milliwatts to 3000 watts; the data is transmitted to a computer over a USB port. The electrical energy consumed by households or companies is billed in watt-hours.<br><br>The measurements focused on two types of equipment: computers and televisions, taking into account different operation modes. All
available alternatives were analyzed: desktops, thin clients, netbooks and notebooks. The corresponding available monitors were also tested: CRTs and LCDs.
In the television category, devices tested ranged from old CRT models to newer Plasmas and LCDs.<br><br>This data allowed the identification of devices like
Desktops that have a significant consumption even when they appear to be turned off (shutdown mode). In addition, the available technologies were compared
to show the advantages and disadvantages of each and propose the best alternatives with the smallest energy expenditure.<br><br>This analysis can be
useful to schools that have many computers and monitors, as the annual energy savings can be substantial. Based on the data collected, a new plan to
configure and use computers at EAC was developed, taking into account the best use of energy and the sustainability of all information processing needs. By
replacing desktops and thin clients with netbooks in classrooms, the library, and other labs, over 2 kWh/month (31.2%) can be saved, and one might observe
performance and portability improvements.
2009 - CS005
FACE RECOGNITION: IS IT A MATCH?
Varuna Rao
Union High School, Tulsa, OK
Most facial recognition computer systems, including two-dimensional and three-dimensional systems, follow a basic algorithm. The algorithm consists of
analyzing nodal points. Nodal points, in this case, are specific pixels on the face highlighting various facial features. These points combined are called a
faceprint. Once the computer creates a faceprint for a captured face, it will try to match it to a face in the database. However, there is no guarantee that
computers will be correct.<br><br> During this research, the objective was to determine the probability that a novel GUI-based software program can match a
subject’s captured facial photograph to the same subject’s photograph in a database, and to determine a facial recognition system’s accuracy. Two images of a
voluntary sample of subjects were acquired. One set of images’ Sum Of Weighted Ratios (SOWR) value was saved to the program’s database and the second
set acted as the captured images. The SOWR values were determined with internodal distances, ratios and weighted ratios. To decrease bias, a simulation
was performed with a sample size of 10% of the population. An in-depth analysis of the average, standard deviation and matched pairs T-Test was performed
to determine the significance of the difference in SOWR values. (Ideally, the difference of the two SOWR values of each subject should equal 0). Once
statistical significance existed for a match, the expected value, or probability, of finding a match using the GUI-based software program with a margin of error
was calculated to be 45%.
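The abstract defines SOWR only as a sum of weighted internodal distance ratios, so the nodal points, ratio pairs, and weights in this sketch are hypothetical placeholders; it only illustrates comparing two faceprints by the difference of their SOWR values:

```python
import math

def sowr(points, weights):
    """Weighted sum of internodal distance ratios for one faceprint.
    The two ratios below are invented for illustration."""
    ratios = [
        math.dist(points["eye_l"], points["eye_r"])
        / math.dist(points["nose"], points["chin"]),
        math.dist(points["nose"], points["mouth"])
        / math.dist(points["eye_l"], points["eye_r"]),
    ]
    return sum(w * r for w, r in zip(weights, ratios))

captured = {"eye_l": (30, 40), "eye_r": (70, 40), "nose": (50, 60),
            "mouth": (50, 80), "chin": (50, 100)}
enrolled = {"eye_l": (31, 41), "eye_r": (69, 40), "nose": (50, 61),
            "mouth": (50, 79), "chin": (50, 101)}
weights = (0.6, 0.4)
# Ideally near 0 when both photographs show the same subject.
print(abs(sowr(captured, weights) - sowr(enrolled, weights)))
```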
2009 - CS043
NEURAL NETWORK MODELING- AN INNOVATIVE TIME AND COST EFFICIENT APPROACH FOR ANTI-CANCER DRUG DEVELOPMENT
Monica Roy Chowdhury
Blue Valley High School, Stilwell, KS
Problem: Anti-cancer drug research has been conducted extensively at tremendous cost, for over four decades, in the finest laboratories of the world, yet the
cure for cancer remains elusive. <br><br>Hypothesis: This project uses a multidisciplinary approach - biochemistry, mathematics, engineering, and computer science - to develop anti-cancer drugs with reduced time and cost. This is done using an Artificial Neural Network (ANN), a mathematical model of the brain.
Analogues of the anti-cancer drug Taxol (Paclitaxel-C47H51O14) were obtained from National Cancer Institute (NCI). An optimal network is able to identify
which analogues possess anti-cancer activity greater than Taxol. <br><br>Approach/Procedure: Matlab and Neural Network toolbox are used to synthesize the
architecture of the ANN. Each network consists of “neurons” distributed over two layers (connected by synaptic weights). 45 Taxol analogues will be used to
train/test a neural network to find an optimal configuration. For each analogue, data for 22 biochemical features is provided. The network employs a back-
propagation (BP) learning algorithm to map the input of the network to its output for all the training patterns. In this research, the input pattern vectors are the
biochemical features and the output values are the known normalized anti-cancer activities. <br><br>Results/Conclusion: Using first four dominant principal
components and a network with two hidden layers, the network correctly identified 10 out of 11 compounds with activities higher than Taxol, reflecting over a
90% accuracy rate. This method is a novel strategy for anti-cancer drug discovery, with time and cost benefits.
Awards won at the 2009 ISEF
Second Award of $500 U.S. savings bond - Ashtavadhani Vidwan Ambati Subbaraya Chetty (AVASC) Foundation
Second Award of $2,000 - National Anti-Vivisection Society
Scholarship Award of $1,000 - National Collegiate Inventors and Innovators Alliance/The Lemelson Foundation
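A hedged scikit-learn sketch of the pipeline described above (the project used MATLAB's Neural Network Toolbox): 22 biochemical features are reduced to the first four principal components, then fed to a small back-propagation network with two hidden layers. Random data stands in for the 45 analogue records:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(45, 22))                  # 22 features per analogue
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # stand-in activity label

X4 = PCA(n_components=4).fit_transform(X)      # four dominant components
net = MLPClassifier(hidden_layer_sizes=(8, 8), # two hidden layers
                    max_iter=2000, random_state=0)
net.fit(X4[:35], y[:35])                       # train on 35, hold out 10
print("held-out accuracy:", net.score(X4[35:], y[35:]))
```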
2009 - CS014
HUMAN VISUAL SYSTEM-BASED ADAPTIVE TONE REPRODUCTION FOR RESTORING IMPERCEPTIBLE DETAILS OF DIGITAL IMAGES
Yi-Ping Shih
Taipei Municipal First Girls' Senior High School, Taipei, Taiwan (R.O.C.), CHINESE TAIPEI
In this project, an adaptive Tone Reproduction Scheme (TRS) is proposed to restore imperceptible details of digital images. Due to the lack of sufficient
dynamic range, many details of digital images are imperceptible to human eyes. To those photos which cannot be retaken, such as aged photos and pictures
taken by surveillance cameras, the processing technique for restoring details is particularly important. In TRS, a local normalization concept was combined
with a global contrast balance process. In the local normalization process, each local contrast of an image was dynamically expanded based on the
characteristics of each local region. Adaptive parameters were set based on the idea that the enhanced contrast of noise has to be lower than the human just-noticeable difference (JND), which represents the threshold of perceptible contrast. In the global contrast balance process, the system selected pixels with
higher contrast from either the original or the normalized image as the result, and simultaneously balanced the selected pixels with adaptive parameter. An
evaluation method based on JND was designed to judge the performance of the TRS by computing the percentage of visual details of an image. The
enhanced images revealed more visible details than the results of recent studies. Furthermore, requiring minimal manual adjustment, this technique can process massive amounts of data. A further contribution of the TRS is that the quality of scanned photos can also be improved. Therefore, this technique has the merit of improving the quality of digitized aged photos and other types of digital images.
Awards won at the 2009 ISEF
Fourth Award of $200 - Association for Computing Machinery
UTC Stock with an approximate value of $2,000 - United Technologies Corporation
2009 - CS011
IMAGE EDITOR FOR SYMBIAN SERIES60 SMARTPHONES
Igor D. Shumilin
Grammar School No. 278, Saint-Petersburg, RUSSIA
Nowadays smartphones and pocket PCs are becoming more and more powerful, their hardware being able to perform many functions earlier available only on full-scale PCs. However, software for these small PCs often does not allow use of all the hardware's power. <br><br>The project is designed to develop a convenient and powerful graphics editor with a wide selection of functions for smartphones under Symbian OS. The currently existing image-editing software for Symbian has very limited functionality. While smartphones are now equipped with relatively good cameras, high-speed internet connections, comparatively large amounts of memory and good processors, the potential of this equipment is still not fully used.<br><br>The developed software, for example, allows taking a photo and immediately processing and uploading it to the Internet, all in just a few minutes without using other computers. The developed editor makes it possible to crop an image, rotate or resize it, correct its brightness or contrast, place text over the image, draw other pictures over it and perform many other conversions. The editor has camera control functionality and is able to get images directly from the camera. Also, the editor supports popular graphics formats.<br><br>The editor has open source code and a modular structure, which simplifies its further improvement, and supports a plugin mechanism for extending its functionality.
2009 - CS009
IDENTIFICATION BY TEXT ANALYSIS
Yosyp Shvab
T. C. Williams High School, Alexandria, VA
Recent research involving anonymity software has been primarily centered around differing implementations of encryption algorithms. A commonly overlooked
vulnerability is the analysis of typing habits. The following research attempts to explore and prove the importance of this flaw. Through simple analysis of text, it
is theoretically possible to determine if two heterogeneous writing samples originated from the same author. The solution consists of two parts: text gathering along with content analysis, and its comparison with a larger pool of text.<br><br>Multiple experiments were set up, all involving a human subject
receiving a topic and typing a paragraph of text, while in the background a specifically crafted program was recording data about their input. All input was
anonymous, and was not stored. However, the data about the input was stored in a database. A randomly selected group of subjects typed two different
samples. Another program transferred the given database into a set of statistics and created a data file. By comparing the graphs of the statistics of different
samples, it was possible to determine which writing samples were written by the same author. <br><br>With further research and a program that covers a
wider range of analysis techniques, it is possible to implement this idea into various desktop applications, such as login prompts. An intruder might know the
password to a system, but will likely not be able to type in the same fashion as the intended user. The same idea may further be applied to email clients.
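An illustrative sketch of the underlying idea (the abstract's own recording and comparison programs are not shown): collect inter-key latencies from each typing session and compare two sessions' timing profiles with a simple mean/spread distance; the timestamps below are invented:

```python
import statistics

def latencies(timestamps):
    # Milliseconds between consecutive keystrokes.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def profile_distance(sample_a, sample_b):
    la, lb = latencies(sample_a), latencies(sample_b)
    return (abs(statistics.mean(la) - statistics.mean(lb))
            + abs(statistics.stdev(la) - statistics.stdev(lb)))

alice_1 = [0, 110, 230, 335, 460, 570]    # one typing session
alice_2 = [0, 105, 220, 340, 455, 565]    # same typist, later session
mallory = [0, 60, 95, 150, 200, 260]      # same text, different typist
print(profile_distance(alice_1, alice_2)) # small: likely the same author
print(profile_distance(alice_1, mallory)) # large: likely a different author
```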
2009 - CS007
DEVELOPING A COMPUTER ROLE-PLAYING GAME AS A SPANISH LEARNING TOOL
Juan Pablo Solano
Oak Grove High School, Hattiesburg, MS
The purpose of this study was to develop a unique, fully functional computer program for learning and practicing the Spanish language, then measure its impact
on vocabulary acquisition. The researcher used Microsoft Visual Basic.Net to create software which combines the aesthetic experience of video games with an
adventure storyline in Spanish to make language learning more visual, interactive, and motivating. A written pre-test, post-test, and survey were used to
measure the user's level of interest and proficiency gains compared to traditional Spanish learning materials. These results were compared with regard to
gender, ethnicity, and existing Spanish proficiency to bring greater variability to the study. To measure the effectiveness of this program, forty participants were
first asked to complete a written multiple-choice vocabulary pre-test using words and phrases they were likely to encounter in the story. Then, they were asked
to complete a multiple-choice vocabulary post-test in order to measure their knowledge after using the program. While all users improved their vocabulary,
analysis of the pre- and post-test scores indicated that men learned more than women from this program, Asian users learned more than other ethnic groups,
and participants with no prior language instruction learned the most from this program.
2009 - CS034
ENHANCED PARALLEL-KEY CRYPTOGRAPHIC ALGORITHM (E-PCA)
Songpon Teerakanok
Buranarumluk School, Trang, Trang, THAILAND
Symmetric-key cryptography is often used to provide security for most information and communication systems. Essentially, the security of cryptographic
algorithms depends on the size of the secret key utilized by systems and users. The larger the key size, the higher the level of security the system can
achieve. However, the expansion of key size also largely increases the computational complexity of the system. Moreover, in the existing design and
implementation of standard algorithms, key sizes are typically limited to specific options; for example, AES provides 128-, 192- and 256-bit keys. For applications in which the security level cannot be adjusted and/or the hardware has limited processing power, it is desirable to develop an improved cryptographic algorithm that consumes only a small amount of additional processing power. To achieve this goal, we developed a new cryptographic scheme called
“Enhanced Parallel-key Cryptographic Algorithm (E-PCA)” that integrates adapted transformation techniques called “PHT-Chaining” and Linear Feedback Shift
Register (LFSR) based random encryption. The experiment results show that the proposed algorithm provides significantly improved security over the standard
algorithms at the same processing time. Furthermore, E-PCA can be applied in conjunction with any symmetric-key based security system. We are convinced that our proposed algorithm can be successfully applied to network security systems and to commonly used limited-resource devices, such as cell phones and smart cards.
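A minimal sketch of the LFSR building block named in the abstract; the full E-PCA scheme and its PHT-Chaining step are not specified, the taps and seed below are illustrative, and a bare LFSR XOR stream is, on its own, not secure:

```python
def lfsr_stream(seed, taps, nbits=16):
    # Fibonacci LFSR: the feedback bit is the XOR of the tapped positions.
    state = seed
    while True:
        bit = 0
        for t in taps:
            bit ^= (state >> t) & 1
        state = ((state << 1) | bit) & ((1 << nbits) - 1)
        yield bit

def xor_crypt(data, seed):
    ks = lfsr_stream(seed, taps=(15, 13, 12, 10))
    out = bytearray()
    for byte in data:
        k = sum(next(ks) << i for i in range(8))  # next keystream byte
        out.append(byte ^ k)
    return bytes(out)

msg = b"hello"
ct = xor_crypt(msg, seed=0xACE1)
print(ct, xor_crypt(ct, seed=0xACE1))  # XOR is its own inverse -> b'hello'
```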
2009 - CS041
INTERACTIVE EVOLUTIONARY COMPUTATION: A COMPARISON OF GENETIC ALGORITHMS
Jesse David Thomason
Arkansas School for Mathematics, Sciences, and the Arts, Hot Springs, AR
This experiment aimed to compare neural-network-based and function-tree-based genetic algorithms which employed interactive evolutionary computation.
These two algorithms were implemented in the context of evolutionary art. The algorithms were then evaluated by human subjects to determine which
produced ideal desktop backgrounds more quickly and which produced overall better desktop backgrounds. The algorithms were run on computers by the
subjects, who were unaware of which algorithm they were evaluating. Each subject utilized both algorithms by running them for four generations each as well
as by choosing the three most ideal desktop backgrounds produced out of images selected from both algorithms. The number of images chosen per generation
per algorithm was analyzed by a scoring system termed alpha. The number of images chosen as best overall per rank per generation per algorithm was
analyzed by a scoring system termed beta. From scoring system alpha it was concluded that the function-tree-based algorithm produced fit solutions more
often than the neural-network-based algorithm; this implied that the function-tree-based algorithm would be more effective in applications valuing the speed of
production of fit solutions over the solutions’ quality. From scoring system beta it was concluded that the neural-network-based algorithm produced solutions
which were more fit overall than did the function-tree-based algorithm; this implied that the neural-network-based algorithm would be more effective in
applications valuing the solutions’ quality over the speed of production of fit solutions.
Awards won at the 2009 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
2009 - CS023
CATCH IT EARLY: WEB-BASED SCREENING FOR MELANOMA
Thomas Benjamin Thompson
Huron High School, Ann Arbor, MI
Artificial intelligence has the potential to improve early detection rates for melanoma. If caught early, melanoma, the most dangerous form of malignant skin cancer, can be surgically removed with a near 100 percent survival rate. Despite this, melanoma kills more than 60,000 people every year. Identifying early-stage melanoma with the naked eye is exceptionally hard, meaning that the average person is unaware of their melanoma until later stages. An automated
diagnosis system was created using artificial neural networks and principal components analysis. The system screens images for signs of melanoma and
produces a positive or negative result. The computer-aided melanoma diagnosis system achieved an overall accuracy of 75%. A freely available website was created using the system. The website will allow more accurate self-examinations, second opinions for dermatologists, and clinical use in rural areas or third-world countries. Hopefully, this will increase early detection of melanoma and reduce mortality rates.
2009 - CS026
"APPROXIMATE MATHEMATICAL CALCULATIONS IN IMAGE RECOGNITION"
Dmytro Titov
Donetska Spetsializovana Physyko-Matematychna Shkola #35, Donetsk, UKRAINE
Object detection and recognition are very important for current information technologies. The aim of the project was to develop the method of detection of
arbitrary graphical objects applying general methods of approximate mathematical calculations for image recognition, to create the algorithms and to develop a
software program in C++ for image recognition purposes. <br><br>The methods used for testing and software elaboration included Prewitt's method for calculating the gradient of a tabulated function, finite-difference formulas for calculating derivatives of a tabulated function, the method of comparison of graphical objects, the application of patterns for object recognition, the method of contour analysis, and neural network methods. General equations are derived for the transformation of discrete tabulated functions, and the application of these equations to image processing is described. As an example, a description of eyes using the «active vision» algorithm is provided. The theory of image processing is presented in detail and the elaborated algorithms are explained. <br><br>The scientific novelty of the project is the improvement of the object recognition method by using gradational segmentation with the proposed algorithms and elaborated software, which permits detection of different graphical objects in arbitrary images. The algorithms and methods
investigated can be also used for development of personal identification systems, in automation of processes, for systems of authorized access, automated
biological monitoring and in robotics. Some of the results of this investigation have been included into the electronic book on object-oriented programming.
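A minimal sketch of the Prewitt gradient step named above (the project itself is in C++; this NumPy version uses a toy image with one vertical edge):

```python
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
PREWITT_Y = PREWITT_X.T

def filter2d(img, kernel):
    # Valid-mode 2-D sliding-window filtering (cross-correlation).
    h, w = kernel.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * kernel).sum()
    return out

img = np.zeros((8, 8))
img[:, 4:] = 1.0                  # vertical edge down the middle
gx = filter2d(img, PREWITT_X)
gy = filter2d(img, PREWITT_Y)
print(np.hypot(gx, gy).round(1))  # gradient magnitude peaks at the edge
```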
2009 - CS307
THE DEVELOPMENT OF A CAD TO FIND ABNORMALITIES IN DIGITAL RADIOGRAPHS: PHASE II
Mary Rochelle Unsworth, Rebekah Unsworth,
Kingswood Academy, Sulphur, LA
The purpose of this project is to improve the previous year’s CAD (Computer Assisted Detection) program, increasing the speed and expanding it so that it can
search not only dental x-rays for abnormalities, but also medical and industrial digital radiographs.<br><br>The Design Criteria for Phase II are:<br><br>1.
Rewrite algorithms to increase accuracy and to reduce the number of false positives.<br><br>2. Create algorithms to analyze periapical radiographs.<br><br>3. Adapt algorithms to scan industrial radiographs.<br><br>4. Completely redesign code with Recursive Logic, increasing speed and accuracy.<br><br>5.
Provide a user-friendly interface that enhances readability of results and reduces the steps required to run the program.<br><br>The CAD works with a 91%
accuracy rate, and has 1.37 false positives per radiograph compared to the 25% accuracy rate with 2.84 false positives from Phase I. Recursive Logic has
reduced the average time from 2.428 seconds to 1.073 seconds, to 44% of the original time. Our CAD can now scan periapical radiographs for existing dental work.
Furthermore, work is in progress on industrial radiographs, but more radiographs must be obtained before conclusive results are available.<br><br>This CAD
will enable radiologists to obtain more accurate diagnostics. Studies have shown that CAD technology can improve the accuracy of identification of
abnormalities up to 50%. This could be the difference between a life-saving diagnosis and catastrophe. In industrial radiographs, it can identify cracks and wear
in pipes and welds that cause dangerous chemical leaks, fires, explosions, loss of life, costly repairs, and irreparable damage to the environment.
Awards won at the 2009 ISEF
Team Second Award of $400 for each team member - IEEE Computer Society
2009 - CS302
SMART FRAMEWORK DEVELOPED FOR WEB 2.0
Vojtech Vit, Stepan Sindelar,
Gymnazium, Praha 6, Arabska 14, Praha 6, CZECH REPUBLIC
Current technologies used for web application development on the most widespread platform, PHP, are advanced but lack mutual compatibility, which causes redundant work. Zend Framework, currently the most popular and advanced "base technology" of this platform, consists of dozens of components that can be used to develop complex object-oriented PHP web applications; however, each of these components is fully independent and thus cannot be used instantly.<br><br>Smart Framework is a library of programs that interconnects the Zend Framework components and integrates some other popular technologies, such as the ORM "Doctrine". It includes standards (e.g. folder structure) and programs that make development straightforward while preserving the ability to adapt to specific project needs using a hierarchical configuration system and add-in technology.<br><br>The key part of Smart Framework is an application
run manager that guides web applications through their runtime. It performs all repetitive tasks like application configuration, user authentication, caching etc.
Its API was designed to allow maximum extensibility and lucidity, for which design patterns and other techniques related to SW architecture were used. Special
attention was paid to simplify development of multi-lingual and multi-graphical websites as they were found the most demanding to be developed. The whole
system has about 9,700 lines of code covered by unit tests, of which about 5,100 lines belong to API documentation.<br><br>Applying Smart Framework in the development of large web systems, such as internet portals, CMS and e-learning websites, accelerates their design, implementation and testing, which enables significant work savings and earlier software delivery.
Awards won at the 2009 ISEF
Second Award of $500 - Association for Computing Machinery
2009 - CS040
HYBRID LIGHT RENDERING
Matt Swaner Vitelli
Academy for Math, Engineering, and Science, Salt Lake City, UT
Introduction. The project’s purpose was to develop a hybrid model combining the advantages of forward rendering with the efficiencies of deferred rendering. In
computer graphics, “rendering” refers to the process by which images of people or objects are presented on a computer monitor. Forward rendering allows
greater artistic freedom; however, it is extremely slow. Deferred rendering allows for high performance; however, it is not available on low-end hardware and has
limited artistic freedom. Ideally, a renderer should incorporate high performance, artistic freedom, and support for low-end hardware. <br><br> <br>
<br>Methods. The hybrid renderer was programmed in HLSL and C#. An engine was designed using Microsoft’s XNA Framework as a base. The engine
emulated the core design features of both forward rendering and deferred rendering. The process combined deferred rendering’s fast lighting with forward
rendering’s customization. <br><br> <br><br>Results. Hybrid, deferred, and forward renderings created using the same graphics card, viewing perspective,
number of light sources, and models were compared. Performance was monitored in frames-per-second. Testing showed the hybrid renderer performed higher
than forward rendering and similar to deferred rendering. It can also simulate materials deferred rendering couldn’t. Moreover, the hybrid was able to operate
on low-end hardware. <br><br> <br><br>Discussion. Hundreds to thousands of light sources can be rendered accurately with the speed of deferred rendering
and the benefits of forward rendering. With this program, lighting can now be removed entirely from the scene. The hybrid rendering model is very powerful and
has application to a wide variety of projects, such as video games and simulations.
Awards won at the 2009 ISEF
Fourth Award of $200 - Association for Computing Machinery
Third Award of $1,000 - Computer Science - Presented by Intel
Award of Merit of $250 - Society of Exploration Geophysicists
2009 - CS010
A REALISTIC GRAPHICS TOOL FOR VISUALIZING PHYSICS BASED COMPUTATIONAL DATA
Jesse Xiang-ying Wenren
Tullahoma High School, Tullahoma, TN
With the advancement of computers, many physical phenomena can be simulated numerically. In aviation, brownout occurs when visibility is reduced due to airborne particles. Helicopters operating in the desert are vulnerable to brownout. Simulations of brownout can determine safe conditions for takeoff or landing. RPGs (rocket-propelled grenades) have shot down helicopters. An RPG's trajectory can be altered by a sonic pulse, causing it to miss its target. Numerical simulation makes such data simpler, faster, and cheaper to acquire. However, the resulting data is understood only by individuals in the
field relating to the simulation. For example, CFD (computational fluid dynamics) scientists are fluent with numerical simulation software, such as Tecplot. The
images that result are not everyday scenes; they consist of curves, contour lines, isosurfaces, etc. <br><br><br> This project attempts to develop a more
general and realistic way of viewing numerical data. The data used in this project is obtained through a CFD code, SAGE flow solver (property of Flow Analysis,
Inc.), which can compute the density of sand at a point in space due to a helicopter’s rotor wake and the pressure on the surface of the RPG due to a sonic
pulse. Using 3D modeling and rendering software, the numerical data is easily visualized. The software used in this project is mental ray© and Autodesk
Maya 8.5. The resulting renders and animations are easily understood by anyone. Combining CFD data with 3D modeling software can transform complicated
numerical data into real life visualizations.
2009 - CS032
CREATING ZINIF: AN INTERPRETED, OBJECT-ORIENTED PROGRAMMING LANGUAGE
Kent Andrew Williams-King
Argyll Centre, Calgary, Alberta, CANADA
In this project, an interpreted, object-oriented programming language, called zinif, was designed primarily around the principle of "everything is an object", and
an interpreter for the language was implemented.<br><br>The language designed uses a syntax similar to that of the C programming language. The
interpreter uses a custom parser and was not developed using automated parser generation. The interpreter is functional, if not complete. The language
design was found to have slight advantages in simplicity and intuitiveness, when measured against both Ruby and C++, and tied with Ruby in flexibility. Even
when compared to Ruby, which also follows the design philosophy of "everything is an object", zinif had some advantages.<br><br>Due to its design, zinif
should work well as an initial teaching language for novice programmers, and fill the niche of programmers looking for a language that is more flexible than
C++, but who are not willing to go as far as a language like Ruby. zinif was released on the internet through an Open Source development website, but the
author is still awaiting feedback. There are many improvements planned for the interpreter, one being the usage of a big number library for number storage.
Awards won at the 2009 ISEF
Second Award of $500 - IEEE Computer Society
Genius Scholarships - Sierra Nevada College
2009 - CS037
IS THAT PARKING TICKET WARRANTED?
John D. Wu
Princeton High School, Princeton, NJ
Parking authorities often waste countless hours each day writing down license plate numbers of cars parked in municipal areas in order to check if the cars had
parked for over the time limit. To streamline this process, this system uses computers to compare the collected plate photos to those in a database of pictures
collected from previous rounds. This project aims to establish a low cost solution which can be used by parking authorities to determine if time-based parking
violations have occurred. <br><br>The system consists of three different parts: the camera, the programs which prepare the images, and the programs which
compare the plates. Three separate hardware methods were evaluated for data collection: digital camera, web camera, and a Rabbit camera application kit.
The images are imported to the computer and preprocessed in preparation for comparison. The image is converted to grayscale, a threshold is applied, the
plate is rotated, and the characters are isolated. Cross correlation was utilized to compare the strands of isolated characters in different images to determine if
there was a match. <br><br>The experimental results on 120 images of license plates confirm the robustness of the new system under different lighting conditions and with plate images taken from different angles. When the images of compared plates were taken at a direct angle, 100 percent accuracy was achieved for plate identification. When the images are taken at an angle, if the characters can be isolated, all potential matches are identified just as accurately.
A system has been successfully created which utilizes a computer to identify the images of license plates.
Awards won at the 2009 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
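An illustrative sketch of the comparison step (the project's own code is not shown): normalized cross-correlation between two isolated-character patches, which is near 1 for the same character and near 0 for unrelated ones; the random patches are stand-ins for thresholded character images:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size grayscale patches;
    1.0 indicates a perfect match."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
char_db = rng.random((16, 16))                    # character from a stored plate
char_new = char_db + 0.05 * rng.random((16, 16))  # same character, new round
char_other = rng.random((16, 16))                 # a different character
print(ncc(char_db, char_new))    # near 1: same plate character
print(ncc(char_db, char_other))  # near 0: no match
```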
2009 - CS031
PERFORMANCE IMPROVEMENT IN ONLINE ANALYTICAL PROCESSING (OLAP)
Michael Yan
Hamilton High School, Chandler, AZ
OLAP is an approach to analyzing multi-dimensional data. In today's highly sophisticated global economy, an array of factors needs to be considered when executing various operations at the strategic and operational levels in order to compete successfully. OLAP, with its capability of handling multi-dimensional data,
becomes a critical tool that allows management to put these considerations into a mathematical model. Through the model the management can get a clear
picture of how the business is running, as well as the business conditions in the future under likely hypothetical conditions. One of the key issues in OLAP is
performance because analyzing data in a large multi-dimensional space is usually time consuming. Widely used commercial OLAP software include Microsoft
SQL Server Analysis Services and Oracle OLAP, and these products all suffer performance issues in one way or another related to the common underlying problem of poorly handled calculations. In this project, some common calculations used in OLAP are examined and a novel algorithm with significantly better performance is proposed. The core technique of the algorithm is to infer the results of a calculation without having to perform the actual computations. The inference is done by analyzing the calculation formula symbolically in conjunction with its underlying data. This technique drastically reduces the space on which the actual calculation has to be performed. If the new algorithm can be seamlessly integrated into commercial software products, it will have a significant
performance impact for many business applications.
Awards won at the 2009 ISEF
First Award of $1,000 - Mu Alpha Theta, National High School and Two-Year College Mathematics Honor Society
2009 - CS038
SHARING SPECTRUM THE SMART WAY
Angela Yu-Yun Yeung
Davis Senior High School, Davis, CA
While the over-crowdedness of the radio frequency spectrum has become an increasingly noted problem, potential solutions lie in the fact that up to 70% of the
licensed spectrum is actually idle at any given time. Cognitive radio is an emerging technology that enables primary and secondary users in a hierarchical
access structure to interoperate and use radio spectrum efficiently. One of the most pressing issues in such systems, however, is the complex interaction
between distributed secondary users: each user must rely solely on its own observations to optimize the tradeoff between selecting idle channels temporarily
unoccupied by primary users and avoiding collisions with other secondary users. The objective of this research was to gain insights into the interaction between
distributed secondary users and to use this information to design high performance networking policies with low complexity. Specifically, a class of distributed
randomized policies was investigated. Using a Partially Observable Markov Decision Process (POMDP) to formulate the problem, MATLAB was employed to
simulate a proposed multi-user policy in various network settings. Simulation data were then analyzed to draw conclusions about the policy, and subsequently
to improve its performance. A theorem was derived in which the optimum percentage of channels to normalize in any given system could be determined, with
the resulting policy improving implementation efficiency by up to 90% while maintaining the optimal average throughput of the original policy. The policy
detailed in this research can have a wide range of applications including cell phone networks, portable devices, and wireless internet.
2009 - CS018
ORGANIZING AND EXPLORING FILES USING TAGS, LINKS AND ONTOLOGY
Huisu Yun
Daejeon Wolpyeong Middle School, Daejeon, SOUTH KOREA
Most existing file systems use a hierarchical structure to categorize files (i.e. directory-based). In such file systems, a file must be contained in one category even though it may belong to multiple categories. To resolve this problem, a user can make several copies and store each of them in the corresponding directory. However, this workaround may cause inefficiency and inconsistency due to the existence of multiple copies in several directories. For instance, since
each user may have a different category scheme and naming convention in mind when storing a file and when finding it, the user has to look through many directories, causing inconvenience. On the other hand, many operating systems support file-searching approaches based on names or some metadata. However, such approaches are too slow for today's large file stores and are limited to specific file formats.<br><br>To solve this problem, I adopt approaches from the semantic web. To give files semantic identities and relationships, tags and links are used. A tag is a categorizing keyword, while a link is used to represent related files. Since multiple tags can be attached to a file, the file can belong to multiple categories. An ontology model organizes the tags and links with semantic relationships. With my approach, when the user searches for a specific file with a keyword, tags are derived by the ontology model even when the given keyword is not matched exactly. Consequently, files carrying those tags can be found. Moreover, through links, the user can also easily identify files which are related to the found files.
2009 - CS030
INFORMATION DISTANCE AND ITS IMPLEMENTATION IN SPAM DETECTION
Janice Rosalina Zhang
Plano Senior High School, Plano, TX
Spam is a serious and ever growing problem on the web. Currently, the most effective spam filters rely on the content of the email, but spammers are using
increasingly sophisticated tricks to manipulate their messages, which can make content-based detection ineffective. Information distance is a novel idea in
algorithmic information theory. It is defined as the minimum amount of shared information between any two sequences based on the Kolmogorov complexity.
<br><br> <br><br>This project proposes a new spam detection algorithm based on information distance. Three Java programs were written to implement and
test information distance-based detection: a Bayesian filter to represent current content-based filters on the market, an information distance filter, and (later,
after preliminary testing) a revised information distance filter. The algorithms were evaluated on two publicly available email corpora (the SpamAssassin corpus
and Lingspam corpus) and a private collection of emails from the author's Yahoo account. The first test scenario focused on real-world emailing and was a
basic test under normalized conditions. The second and third test scenarios focused on the weaknesses found with content-based detection that resulted from
the manipulation of email content.<br><br> <br><br>All in all, information distance proves to be a more reliable detector of spam than content-based filters, as
well as less likely to result in false positives. Not only is it effective under regular emailing conditions, but when tested for the flaws with content-based
detection, it proves unwaveringly accurate.
Awards won at the 2009 ISEF
Tuition Scholarship of $150,000 - Drexel University
Scholarship Award of $12,500 per year, renewable annually - Florida Institute of Technology
Award of three $1,000 U.S. Savings Bonds, a certificate of achievement and a gold medallion. - United States Army
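True information distance is uncomputable because it is defined through Kolmogorov complexity, so practical systems approximate it; the sketch below uses the standard normalized compression distance with zlib as the approximation (the abstract's own approximation is not specified, and the messages are invented):

```python
import zlib

def C(data: bytes) -> int:
    # Compressed length as a stand-in for Kolmogorov complexity.
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

spam = b"WIN CASH NOW!!! Click here for your FREE prize, limited offer!"
ham = b"Hi Jan, attached are the meeting notes from Tuesday. Best, Li"
new_msg = b"CLICK here NOW to claim your FREE cash prize - limited time!"
# Classify by whichever reference corpus the new message is closer to.
print("spam" if ncd(new_msg, spam) < ncd(new_msg, ham) else "ham")
```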
2009 - CS022
THE APPLICATION OF ARTIFICIAL NEURAL NETWORKS TO THE HEURISTIC ANALYSIS OF COMPUTER VIRUSES
Nelson Zhang
Shanghai American School, Shanghai, CHINA
This project tackled two major challenges to antiviral technology: the detection of metamorphic and polymorphic viruses. It was hypothesized that distinctive
patterns in viral code could be revealed using a self organizing map with the octal dumps of viruses, and that an artificial neural network could be trained to
recognize such patterns. Such a system might classify suspected programs with higher accuracy than signature detection and heuristic analysis methods.<br>
<br> Octal dumps were generated from 100 executables and input into Kohonen self organizing maps. Each Kohonen map thus became a topographical
representation of the input code. Distinct graphical patterns were immediately noticeable within the maps, and could be represented by features such as cluster
count and average cluster size.<br><br> Each map was converted into its feature vector and input into a multilayer perceptron and a “committee of machines,”
consisting of six sub-networks. Both networks were trained through a supervised back-propagation algorithm. When tested against validation inputs, they
classified viruses and harmless programs with average accuracies of 97.2% and 96.0% respectively. Two certified antiviral programs tested had average
accuracies of 89.0% and 88.0%.<br><br> The results of this project demonstrate that an antivirus system utilizing a combination of Kohonen networks and
multilayer perceptrons can reveal and recognize patterns inherent in either viruses or legitimate programs, and is far more effective in detecting self modifying
viruses than common signature detection and heuristic analysis methods.
Awards won at the 2009 ISEF
Third Award of $300 - Association for Computing Machinery
Second Award of $1,500 - Computer Science - Presented by Intel
2009 - CS015
VISION: A MOBILE SOCIAL NETWORKING PLATFORM BASED ON CLOUD COMPUTING AND AUGMENTED REALITY
Xiaowei Zhu
No. 2 Secondary School Attached to East China Normal University, Shanghai, CHINA
Vision is a novel social networking platform specially designed for mobile phones. The basic idea of Vision is to enable mobile phone users to retrieve more abundant and vivid information through a seamless integration of information in the virtual world with that in the real world.<br><br> Through Vision, mobile
phone users can publish their contents in various ways: updating the status, uploading photos and videos etc. Since Vision is a cloud-based platform, mobile
phone users needn’t worry about the limited storage and computing power of mobile phones. Besides, developers can also use Vision APIs to develop new
mobile social networking applications so that the usages of mobile phones can be well enriched.<br><br> The novelty of Vision lies in its unique way of
information integration. Through embedded GPS and digital compass inside the mobile phone, Vision is able to accurately get the location information and
orientation information. That information and other associated contents will then be integrated into the images captured by the mobile phone camera according
to the V-Filter algorithm invented by the author. The overlaid contents are updated promptly and automatically as the user moves.<br><br> As a result, Vision
enables a wide range of mobile usages through this kind of information integration between virtual world and real world.
Awards won at the 2009 ISEF
First Award of $1,000 - Association for Computing Machinery
2010 - CS029
90'S TO 10'S: REUSING COMPUTERS IN A SMART WAY
Yazan Mohammed Alshehri
AlBassam Schools, Dammam, Eastern Province, SAUDI ARABIA
Annually, a huge number of old workstations pile up in dumpsters, while many computer users buy computers with capabilities that significantly exceed their needs. The purpose of the project was to deliver high-grade applications as a low-cost solution for computer users, corporations, schools, and developers. This could be implemented using old computers refurbished with efficient operating systems. The hypothesis might be achieved using open-source software that focuses on a small-footprint OS with a smart kernel and strong security, low use of system resources and, most significantly, no alteration of the hardware. The experiments performed to test this hypothesis included comparing our relatively old and cheap systems with new and expensive ones. In addition, a survey was conducted to determine the computer categories utilized by users. Testing of operating systems and software platforms was carried out hundreds of times to check the validity of the hypothesis. Finally, heavy-duty applications were installed on our systems to test their reliability. It is concluded that the best usage platforms for old computers are cloud/web browsing, servlet hosting and development platforms. This efficient, yet simple, solution applies to a large volume of hardware that society seeks to dispose of in a creative manner.
2010 - CS030
FROBENIUS PROBLEM ADVANTAGES OVER ENCRYPTING METHODS
Hassan Mohammed Alsibyani
Dar Alfikr Schools, Jeddah, SAUDI ARABIA
Encryption methods have been consistently improving with the goal of making it harder to break the codes. This research studies the Frobenius problem and
how it can be applied to encryption in computer science to create a stronger methodology. The key steps of this investigation were: first, numbers were selected to correspond to letters, and their Frobenius numbers were identified using Rodseth's algorithm, with some random multiplication applied over the public key; second, the finite solutions of the Frobenius equation were identified, one of them was selected, and its Frobenius number was similarly identified, thereby creating the private key and an additional Frobenius equation; third, the recipient solves the equation, yielding a finite set of solutions, and then uses the private key to determine the lower and upper equations and the right solution. The results of the mathematical experiments proved this methodology efficient to a certain degree. The efficiency of these results may be affected by the parities of the numbers and their relative primality. The conclusion of this project is that the Frobenius problem can positively affect encryption methods due to the complexity of its polynomial-time calculations. In future studies, this methodology may be applied to more complex systems and become even more effective on a wider scale.
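For background, a minimal sketch of the mathematical object behind the scheme: for two coprime values a and b, the Frobenius number is given in closed form by Sylvester's formula ab - a - b, while three or more values require algorithms such as Rodseth's, which this toy does not implement:

```python
import math

def frobenius_two(a: int, b: int) -> int:
    # Sylvester's formula: largest integer NOT representable as xa + yb
    # with x, y >= 0, valid when gcd(a, b) == 1.
    assert math.gcd(a, b) == 1, "defined only for coprime a, b"
    return a * b - a - b

def representable(n: int, a: int, b: int) -> bool:
    # Is n expressible as xa + yb with nonnegative x, y?
    return any((n - x * a) % b == 0 for x in range(n // a + 1))

a, b = 5, 7
g = frobenius_two(a, b)
print(g, representable(g, a, b), representable(g + 1, a, b))  # 23 False True
```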
2010 - CS014
OPTIMIZING THE DEFLATION STRATEGY OF A DIVIDE AND CONQUER EIGENSOLVER WITH A GENETIC ALGORITHM
Matthew R. Bauerle
Bauerle Homeschool, Fenton, MI
A classic problem in numerical linear algebra is the calculation of the eigenvalues of a symmetric matrix. These eigenvalues are useful in calculating critical
frequencies in many systems, such as vibrations in mass spring systems. Not much research has been done on the deflating criteria in the divide and conquer
eigenvalue algorithm. My hypothesis is that a better deflating criterion than the currently used constant threshold exists, and that the deflating threshold will be
a function of the size of the matrix. A genetic algorithm is used to search for this better function. The genetic algorithm (GA) in 12 out of 18 runs (on 100 random
matrices) showed that a logarithmic function is better than a constant function. A quadratic fit yielded a logarithmic deflating function that was optimal for
general matrices, not just the 100 random matrices used in each run of the GA. Data collected with this new function showed acceptable errors (1.73 accurate
digits versus the average 1.88 computed with the GA). With this function, the divide and conquer method yielded 2.61 accurate digits on larger matrices. In
conclusion, the hypothesis was correct, and the new logarithmic deflating function is better than a constant function.
Awards won at the 2010 ISEF
Renewable Scholarships to the IIT College of Science and Letters - Illinois Institute of Technology
2010 - CS048
NEUROSLAB RAPID APPLICATION DEVELOPMENT FOR ARTIFICIAL INTELLIGENCE
Ionut Alexandru Budisteanu
National College "Mircea cel Batran", Ramnicu Valcea, Romania, ROMANIA
NeurosLab lets you create an Artificial Neural Network (ANN), a multi-layer perceptron, with drag-and-drop, then simulate and train the created ANN, with 2D and 3D visualization.<br><br>To demonstrate the project, I made some examples: optical character recognition (for handwriting), face recognition, speech recognition, and function approximation. <br><br>Other possible uses include fingerprint recognition, retina recognition, disease detection from X-rays (for example, cancer), forecasting, eye tracking, and user-computer interfaces.<br><br>The difference between other software and NeurosLab: after training an ANN, if we want to create a program that simulates it, we can export the ANN weight matrix and bias array together with a template class for Delphi and Pascal source, or a Dynamic Link Library (DLL) that simulates the ANN, so my software can be used from any IDE that can import functions from a DLL or load and save XML. <br><br>The idea is that a developer does not need to know how to write artificial-intelligence software, that is, the simulating and learning algorithms; they just use NeurosLab, which exports the weights and generates source code.<br><br>Non-programmers can simply build and train an ANN in NeurosLab. For example, doctors who want software that detects cancer from X-rays need only some experience with NeurosLab, not the source code or the mathematical theory behind it, so anybody can use NeurosLab.
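The export idea above is the heart of the tool: a trained network is fully described by its weight matrices and bias arrays, so any host language can replay it. A minimal NumPy stand-in for such an exported simulator is sketched below; the layer sizes, parameter values, and tanh activation are assumptions for illustration, not NeurosLab's actual export format.

import numpy as np

def simulate(x, weights, biases):
    # Forward pass of an exported multi-layer perceptron: each layer is
    # one weight matrix, one bias array, and an activation function.
    for W, b in zip(weights, biases):
        x = np.tanh(W @ x + b)
    return x

# Hypothetical exported parameters for a 2-3-1 network.
weights = [np.array([[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]]),
           np.array([[0.7, -0.5, 0.2]])]
biases = [np.array([0.0, 0.1, -0.1]), np.array([0.05])]

print(simulate(np.array([1.0, 0.5]), weights, biases))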
Awards won at the 2010 ISEF
First Award of $1,000 - Association for Computing Machinery
Fourth Award of $500 - Computer Science - Presented by Intel
2010 - CS003
OFF-LINE CHARACTER RECOGNITION USING VECTOR TRAINED ARTIFICIAL NEURAL NETWORKS
Matthew Joseph Chang
Chang Home School, Austin, TX
The purpose of this project was to improve on the current artificial neural network based method for off-line character recognition. At the onset of research, the standard pixel based approach was implemented as a benchmark for improvement. After exploring and testing multiple theories, a system was developed that was able to significantly improve both recognition for handwritten characters and training times. The system consists of three key steps: <br><br>1. Analyze the input bitmap and detect representative points<br><br>2. Generate the average of the representative points, and express the points as vectors originating at the average<br><br>3. Train an artificial neural network using the vectors as inputs and backwards propagation as the training method<br><br>The above method overcomes the shortcomings of the conventional pixel based method. While the pixel based method is accurate on typed text or otherwise consistently formed
characters, it fails to recognize more variable handwritten characters. What makes the above method more effective for handwritten characters is that, unlike
the pixel based method, it is size and location independent, which is a major innovation. After testing on a database of handwritten character samples, it was
found that the vector based method was up to 30% more accurate, and trained an average of 25 times faster than the pixel based method. This technology has
implications far beyond simple character recognition. Using the concepts of input reduction in this method, great improvements can be made in camera based
vision, and an algorithm for general object detection becomes a real possibility.
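A rough Python/NumPy sketch of the three-step feature extraction is shown below. The choice of representative points and the normalization are hypothetical details the abstract does not specify, but the sketch shows why vectors measured from the average point are independent of a character's size and location.

import numpy as np

def vector_features(bitmap: np.ndarray, n_points: int = 16) -> np.ndarray:
    ys, xs = np.nonzero(bitmap)                     # step 1: ink pixels
    pts = np.stack([xs, ys], axis=1).astype(float)
    idx = np.linspace(0, len(pts) - 1, n_points).astype(int)
    pts = pts[idx]                                  # representative points
    center = pts.mean(axis=0)                       # step 2: their average
    vecs = pts - center                             # step 3: vectors from it
    scale = np.linalg.norm(vecs, axis=1).max() + 1e-12
    return (vecs / scale).ravel()                   # size/location-free ANN input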
Awards won at the 2010 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
2010 - CS049
RECOGNITION OF SOUND SAMPLE SEQUENCES USING WAVEFORM ANALYSIS: DETECTING STUTTER WORDS IN AN AUDIO STREAM
Wisdom Liang Chen
The Alabama School of Fine Arts, Birmingham, AL
The focus of this research is an investigation into waveform analysis of an existing audio file. The investigation will develop and test an algorithm that identifies
a given specified word/section within an audio stream file. The user will be prompted to pick a word that needs to be searched for either through a .wav file or
through specifying a section within the currently represented audio file. After the word specification is set, an audio file can be searched through. The possible
word transitions and sections matching the desired word will be marked and tagged visually on the audio stream, allowing the user to see the positions where the target sound was found. Once the matches have been identified, the developed software will allow the user to play each section to see if there
are any false positives within the search. Several classes were created in helping to develop the search method. The research included an investigation into
new algorithms to search large audio streams for sample sounds. An application of the project was used to identify stutter words uttered during an interview.
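The abstract does not name its matching algorithm, so the following Python sketch uses normalized cross-correlation, one standard way to mark candidate positions of a target sound within a stream; the window hop and threshold are illustrative choices.

import numpy as np

def find_sample(stream: np.ndarray, sample: np.ndarray, threshold: float = 0.8):
    # Slide a window over the stream; a high normalized correlation with
    # the sample marks a candidate match to tag for playback.
    n = len(sample)
    sample = (sample - sample.mean()) / (sample.std() + 1e-12)
    hits = []
    for i in range(0, len(stream) - n, max(1, n // 4)):
        win = stream[i:i + n]
        win = (win - win.mean()) / (win.std() + 1e-12)
        score = float(np.dot(win, sample)) / n      # correlation in [-1, 1]
        if score > threshold:
            hits.append((i, score))
    return hits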
2010 - CS051
1 SUN + 8 BITS = H2O: DIGITALLY OPTIMIZING "SMART" PHOTOVOLTAICS FOR A WATER DISTILLATION APPLICATION
Nicholas Mycroft Christensen
Wetumpka High School, Wetumpka, AL
After researching hydrology, water contamination, photovoltaics, Peltier devices, electrical engineering, assembly language, and microprocessors, I worked
with an electrical engineer to design a customized circuit board with pulse-width modulators, analog-to-digital converter, thermistor, photocell, and radio. I
programmed a PIC18F8722 microprocessor in assembly to optimize the power of a solar panel by tracking the maximum power point in real time and using
switches to control the peak performance. This microprocessor allows two interrupts in the program, necessary for updating data on the power output of the
solar panel. Additionally, the board has a radio link to transmit operational and environmental data in real time.<br><br>A specific application, Phase I, is a
solar-powered water distillation device, using a Peltier thermoelectric cooler and fans, that distills potable water from contaminated water. While the distillation
device worked occasionally, yielding over a liter several times and up to 2.2 liters in one day, the results were inconsistent. The un-optimized solar panels have
proven unreliable, with minimal voltage readings even at midday and no readings over 7.2 volts despite a 12-volt rating. As there was a long
delay in the production of the customized circuit board, Phase II comparative studies with the optimized system are still in progress.<br><br>If results prove
promising, an immediate application is the production of inexpensive point-of-use devices that could be used in developing countries or emergency scenarios.
More importantly, the microprocessor-enhanced solar panels have thousands of applications and could be networked for solar farms.<br><br>
Awards won at the 2010 ISEF
Second Award of $1,500 - Air Force Research Laboratory on behalf of the United States Air Force
2010 - CS012
AUTOMATED CHARACTERIZATION OF ULTRA-LIGHT METAL FOAM FROM ΜCT RECONSTRUCTED IMAGES
Zheng Fu Edrei Chua
National Junior College, Singapore, SINGAPORE
Ultra-light metal foams, such as aluminium foams, have found uses in many novel applications due to their cellular structure which allows for high stiffness in
conjunction with very low specific weight. Characterizing the internal structure of these foams is necessary for a more complete understanding of how structural
properties such as pore size affect the foams’ macroscopic material properties. X-ray micro computed tomography (µCT) offers a non-destructive means of
achieving this goal. However, manual analysis of µCT slice images is time-consuming and often unreliable. To achieve automation, a customized algorithm was
designed to segment µCT slice images of aluminium foams to replace manual segmentation. A list of functionally important 2D parameters was also defined
based on a literature review. Matlab functions to extract these parameters were developed. Semi-automated image registration was also implemented. Finally,
characterization of pore deformation during compression was presented as a novel approach to investigate mechanical properties of foams, and as a possible
application of the automated processing. Overall, results showed that automated processing is successful in characterizing and analyzing CT slice images of
metal foams. Comparisons with manual segmentation revealed a good degree of accuracy with average porosity error being approximately 2%. The difference
between the same statistical moments derived from automated and manual segmentation shows a high level of consistency, which could have potential use in
improving the accuracy of parameters measured from automated segmentation. In terms of speed, automation reduces time required per slice image by 18
times to approximately 1.5 minutes.
Awards won at the 2010 ISEF
Third award of $150 - Society for Applied Spectroscopy, Northern California Section
2010 - CS038
CRYPTODEFENDER: A SMART WAY TO PROTECT YOUR FILES
Tin Lok Chung
Maryknoll Fathers' School, Hong Kong, HONG KONG
The number of information-leakage cases has been increasing, mainly due to users' poor habits. Lost USB drives, careless use of P2P software, and failure to encrypt files before saving often result in information leakage. CryptoDefender is therefore designed to keep files from being exposed directly to the public and to change users' habits in handling confidential information. CryptoDefender is an add-in for application software, e.g. Word, Excel, etc. It appears as a new tab in the ribbon that gives the software an alternative way of saving and opening a file. It protects files in two ways: encryption and steganography. The data is first encrypted into ciphertext, and the encrypted data is then hidden in an image file to lower readers' suspicion. All of these procedures are done with one click: to encrypt or decrypt, the user just clicks "Protect" or "Decrypt". An extremely fast way to encrypt a file is "Quick Protect", which randomly chooses an image from the image folder according to file size, saving the time spent choosing an image. In conclusion, CryptoDefender is user-friendly and helps protect files at any time. Users may therefore change their habit and encrypt files when saving them, avoiding data leakage.
2010 - CS028
SILENT AS A NINJA: NETWORK SECURITY MADE EASY
Jordan Robert Conlin
Wasatch High School, Heber City, UT
Computers are a powerful tool, and are used in nearly every aspect of our everyday lives. But not all users take the necessary steps to protect their computers.
Computer security may be neglected for a number of reasons. For example, proprietary security software is expensive and must be upgraded periodically.
Running any kind of security software slows down computers and network connections, and many users find that configuring security software and hardware is
simply too confusing.<br><br>I developed a home security solution called the Ninja Network Appliance. I used Java to create a user-friendly web interface and found an inexpensive computer with very low power consumption. All of the software used in my project is open source, allowing me to keep the overall cost low. The Ninja Network Appliance protects all of the computers on a network without slowing them down, because the appliance does all of the work normally done by per-machine security software while remaining invisible to attackers. This is accomplished by using the Snort IDS to look for threats on the local network. Any threats found by Snort are logged to a MySQL database. My Java application continuously reads the database and takes action based on user preferences. If necessary, it can disable the entire network to prevent the spread of malicious software. Although this method may seem primitive, the result is a device that is economical, effective, and easy to use.
2010 - CS301
MY FUTURE, MY ROBOT
Gatlin Wayne Douglas, Viviana Salzar, Wesley White
Odessa Senior High School, Odessa, TX
Imagine your future, with the possibility of not having to risk people's lives to fight fires. The future in store for today's students is full of beings with artificial intelligence, and today's teenagers will not only be there, they will be the generation to design and use this technology. This project takes on the task of designing and creating a robot to contribute to the scientific community and help solve the problem of fighting fire without risking the lives of our citizens. The robot, designed and built by a group of third-year computer science students, was to run through a scaled-down house and extinguish a simulated fire autonomously; however, experiments with the given materials showed this was not possible in situations resembling a smoky house fire. After reworking the direction of the project, the decision was made to include a human interface to allow human intervention and priority. The robotics team would accomplish the following: first, execute the preliminary stages of design in both the physical and programming senses, determining what the robot must do to succeed and developing the source code in the RobotC language; second, during development of the robot and its programming, apply the rules of software engineering from analysis to implementation of the code; finally, after both construction of the robot and completion of the code, test the robot extensively in a practice arena built to the published standards of the Trinity Fire Fighting Contest, finding an open flame and extinguishing it, saving life and property, with the ability to be controlled by a human.<br><br>After completing all stages, the team successfully created a robot that can be navigated through a house and extinguish a fire.
2010 - CS004
EXAMINING THE EFFECTS OF DIFFERENT RANDOM NUMBER GENERATORS ON PERLIN NOISE WITH DIFFERENT NUMBER DISTRIBUTIONS
Jacob Allen Dufault
Canterbury School, Fort Myers, FL
Perlin Noise aids in the generation of clouds and three-dimensional terrain and enhances the visual fidelity of images. This project examines how different noise
functions affect Perlin Noise’s texture output. The researcher employed random number generators mt19937, mt11213b, ecuyer1988, hellekalek1995,
kruetzer1986, and taus88 for noise functions. Additionally, the researcher made each noise function generate both normal and uniform number distributions. A
normal distribution is a bell curve; the probability of each number appearing differs. In a uniform distribution, the probability of each number appearing is equal.
<br><br> The researcher hypothesized that statistically significant variation would occur in the texture generated by Perlin Noise. The experimental data did not
support this hypothesis because the data showed similar graphs, standard deviation, mean, median, and inter-quartile range from each noise function. The
researcher also conducted chi-squared tests of homogeneity on each number distribution and was unable to reject the null hypothesis that the noise functions
generated the same number distributions. Additionally, the experiment demonstrated that number distribution (normal / uniform) does not severely affect Perlin
Noise’s output, excluding scale. Therefore, Perlin Noise is extremely resilient to different noise functions. Because of this research, Perlin Noise can be sped up
and run in time-sensitive code by using a substandard random number generator, while still outputting high-quality results. Two-fold speedups were noticed.
<br><br>
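A small Python sketch of the experimental idea, with NumPy bit generators standing in for the Boost-style generators named above (MT19937 is available directly; PCG64 substitutes for the rest): build the gradient table used by classic Perlin noise from each generator and compare summary statistics.

import numpy as np

def gradient_table(bitgen, n=256):
    # Unit gradient vectors for classic 2D Perlin noise, drawn from the
    # supplied bit generator.
    rng = np.random.Generator(bitgen)
    angles = rng.uniform(0.0, 2.0 * np.pi, n)       # uniform distribution
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

for bg in (np.random.MT19937(1), np.random.PCG64(1)):
    g = gradient_table(bg)
    # Near-identical summary statistics across generators echo the
    # finding that Perlin noise is resilient to the noise function used.
    print(type(bg).__name__, g.mean(axis=0).round(3), g.std(axis=0).round(3))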
Awards won at the 2010 ISEF
HM - American Statistical Association
2010 - CS009
BIOINFORMATIC IDENTIFICATION OF NOVEL PROTEIN DOMAINS, YEAR TWO
Kamakshi Lajvanthi Duvvuru
Little Rock Central High School, Little Rock, AR
Determination of protein function is one of the most challenging problems in the postgenomic era, as the function of many proteins, called hypothetical proteins, remains unknown. The purpose of this experiment is to discover any novel structural fields, or domains, that may be present in hypothetical proteins through large-scale sequence analysis. <br><br> Of the 11 million hypothetical protein sequences downloaded from NCBI in the previous year, 2,500 were compared to each other for sequence similarity using BLAST this year. The similar regions, called features, are potential domains. The accuracy of the feature-finding algorithm, tested on 18 multidomain proteins with known domains, showed a relatively high sensitivity and a reasonable specificity, statistics favorable for this project. Therefore, a program capable of accurately identifying potential novel domains in a dataset of 11 million proteins has been developed.
2010 - CS045
GRAMMAR AGNOSTIC SYNTACTICAL VALIDATION OF NATURAL LANGUAGE
Tucker Sterling Elliott
Suncoast Community High School, Riviera Beach, FL
The purpose of this research is to create a grammar agnostic program in C# for use in confirming the validity of sentences in English by using established word
patterns found in a large source of virtually entirely grammatically correct sentences. A program was designed to parse a large reference text of grammatically
correct sentences and tally up the frequency of every single word, two word, and three word permutation. Using this data, a program was designed to take an
inputted sentence and compare the one, two, and three word phrases found within the input to the frequency of these phrases in the reference text. Commonly
appearing phrases in the reference text are assumed by the program to be valid (meaning that a reader can discern meaning from them). An input sentence's validity is determined by reporting how common the phrases within the input are. It was found that, while initial builds of the program were slow (taking about
40 seconds on a high-end machine to produce results for a 4 word sentence), the program was able to differentiate between valid and invalid sentences, giving
higher scores to the valid sentences. While the concept of grammar agnostic validation was shown to be effective, the program will need additional work,
reducing loading times and adding new features such as allowing users to add their own sentences to the reference text as valid or invalid, before it would be
feasible in real-world applications such as word processor grammar checkers and plagiarism detectors. Later revisions of the program have drastically lowered
processing times to almost one second on a high-end machine for a 4 word sentence. This general approach could have a wide variety of uses in linguistics as
well as other fields such as bioinformatics.
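A minimal Python sketch of the scoring idea (the project itself was written in C#): tally every one-, two-, and three-word phrase in a reference text, then score an input sentence by how common its phrases are.

from collections import Counter

def build_counts(reference: str) -> Counter:
    # Tally every one-, two-, and three-word phrase in the reference text.
    words = reference.lower().split()
    counts = Counter()
    for n in (1, 2, 3):
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return counts

def score(sentence: str, counts: Counter) -> float:
    # Average reference frequency of the sentence's phrases; higher
    # scores suggest a valid (reader-comprehensible) sentence.
    words = sentence.lower().split()
    phrases = [tuple(words[i:i + n])
               for n in (1, 2, 3)
               for i in range(len(words) - n + 1)]
    return sum(counts[p] for p in phrases) / max(len(phrases), 1)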
Awards won at the 2010 ISEF
Second Award of $150 - Patent and Trademark Office Society
2010 - CS041
MODELING ELECTRICAL DEMAND USING ARTIFICIAL NEURAL NETWORKS FOR SUSTAINABLE POWER GENERATION
Andrew William Ellis
Westfield High School, Westfield, MA
The purpose of this experiment was to determine the effectiveness of Artificial Neural Networks (ANNs) in predicting the hourly electrical demand on the New
England power grid based on data from ISO New England. <br><br> The software program MATLAB was used to create, train and test the ANNs by
determining the relationships between the input variables (for example: year, month, day of the week, hour, temperature, dew point, etc.) and the output (hourly
demand). ANNs were chosen because of their ability to learn complex, non-linear relationships between the inputs and outputs directly from the data, as
compared to traditional models in which humans choose the functional form. Three different ANN models were developed and trained a total of 92 times. These
models were compared to three different Multiple Linear Regression models fitted to the same data sets. <br><br> It was found that the best ANN could predict
future demand with great accuracy as measured by a coefficient of determination value (r squared) of 0.999. This easily outperformed Multiple Linear
Regression which only had an r squared value of 0.987. Through a sensitivity analysis, the ANN was also used to model the individual effects of each variable
on the hourly electrical demand. Each sensitivity graph was consistent with expected trends, and the variable with the greatest effect on hourly demand was the
historical demand value from the previous hour. <br><br> The predictive value of the ANNs means that the electrical power industry can operate more
efficiently and reduce the harmful impact of fossil fuels on the environment.
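The comparison can be reproduced in miniature. The sketch below uses scikit-learn instead of MATLAB and synthetic hourly-demand data rather than ISO New England's, so the numbers are illustrative only; it shows how an ANN can outperform multiple linear regression once demand contains a non-linear term.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
hour = rng.integers(0, 24, 2000)
temp = rng.normal(15.0, 10.0, 2000)
prev = 1000 + 300 * np.sin(2 * np.pi * hour / 24) + 5 * temp  # prior-hour demand
demand = prev + 4 * np.maximum(temp - 20, 0) ** 2 + rng.normal(0, 20, 2000)
X = np.column_stack([hour, temp, prev])

ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                                 random_state=0)).fit(X[:1500], demand[:1500])
mlr = LinearRegression().fit(X[:1500], demand[:1500])
print("ANN r^2:", ann.score(X[1500:], demand[1500:]))   # higher
print("MLR r^2:", mlr.score(X[1500:], demand[1500:]))   # lower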
2010 - CS043
AUTOMATIC PARALLELIZATION THROUGH DYNAMIC ANALYSIS
Kevin Michael Ellis
Catlin Gabel, Portland, OR
Parallel programming is necessary both for high-performance computing and for continued performance increases in consumer hardware. To advance parallel
programming, we develop an automatically parallelizing tool called Dyn. Dyn uses novel dynamic analyses to perform data dependency analysis, data
privatization, control flow analysis, and profile the program. The data from these analyses is used to generate parallel C code from a serial C program and is
capable of targeting a variety of implementations of parallelism, currently multicore via OpenMP, GPU via CUDA, and clusters thereof. Dyn also uses its own
new software distributed memory system which, in concert with profiling data, will automatically tune itself to the cluster in question. We develop a set of
benchmarks which would be difficult to automatically parallelize using conventional static analysis, yet we show to be easily automatically parallelizable using
our dynamic analysis. We also test Dyn against scientific computing libraries and applications, achieving speedups comparable to, and occasionally exceeding,
those obtained by manual parallelization. We also develop a formal system for describing dynamic analysis and parallel computing known as the Calculus of
Parallel States. We prove semantics preservation with respect to parallelization of terms without data dependencies. Our final result is a dynamic-analysis
based method of automatic parallelization and a rigorous mathematical theory to support it.
Awards won at the 2010 ISEF
Second Award of $500 - Association for Computing Machinery
Second Award of $500 U.S. savings bond - AVASC-Ashtavadhani Vidwan Ambati Subbaraya Chetty Foundation
First Award of $1,000 - IEEE Computer Society
A Scholarship of $50,000 - Intel Foundation Young Scientist Award
Intel ISEF Best of Category Award of $5,000 for Top First Place Winner - Computer Science - Presented by Intel
First Award of $1,500 - Symantec Corporation
Award of three $1,000 U.S. Savings Bonds, a certificate of achievement and a gold medallion. - United States Army
2010 - CS044
MODELING BULLET PATHS THROUGH A HAWKER HURRICANE'S WING
Aaron Gabriel Feldman
Charter School of Wilmington, Wilmington, DE
The purpose of this project was to determine the probability of a round from a MG-FF 20 mm cannon hitting the wing of a Hawker Hurricane (a British World
War II fighter plane) but passing through the wing without hitting the internal structure. This is significant because the Hawker Hurricane's early versions (1935-1939) were built with fabric-covered wings, and some historians note that this may have been advantageous over stressed-skin metal wings because the high-explosive rounds fired from the standard German MG-FF 20 mm cannon could have passed through the wing without hitting the internal structure and, therefore,
without detonating. This would cause significantly less damage. A Java program was written to simulate millions of different bullet paths through the Hurricane’s
wing; the program determined whether or not each specific path would intersect with the wing’s internal structure, and then that information was used to
calculate the probability of a bullet hitting the wing but missing the internal skeleton (a Monte Carlo method). Although the hypothesis for this experiment was
that the probability would never be greater than 10%, this was disproved as the results consisted of probabilities ranging from 14.3% to 55.7%, depending on
the angle. Thus, this project provides statistical backing for the idea that bullets would often pass through the Hurricane’s fabric-covered wing, and thereby this
project adds to our society’s understanding of history and of such pivotal battles as the Battle of Britain.
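The Monte Carlo idea reduces to sampling random paths and counting the fraction that miss the internal structure. A one-dimensional Python sketch with invented spar positions follows; the actual program modeled the Hurricane's real wing geometry and firing angles.

import random

def miss_probability(trials: int = 100_000,
                     wing_span: float = 2.0,
                     spars=((0.45, 0.55), (1.25, 1.35))) -> float:
    # Fire random straight paths through a 1-D wing cross-section and
    # estimate the chance a hit avoids every (hypothetical) spar.
    misses = 0
    for _ in range(trials):
        x = random.uniform(0.0, wing_span)          # random entry point
        if not any(a <= x <= b for a, b in spars):  # missed all spars?
            misses += 1
    return misses / trials

print(miss_probability())   # about 0.90 for these made-up spar widths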
2010 - CS007
A NOVEL APPROACH TO TEXT COMPRESSION USING N-GRAMS
Dylan Freedman
Carmel High School, Carmel, CA
This project investigated a novel approach to text compression using principles of natural language processing. After observing traditional algorithms compress
small text files and yield inefficient results due to a lack of useful redundant content, I wondered whether compression exploiting redundancies inherent in
human language could offer any advantages. To implement this concept, I first obtained a copy of Google’s English N-Gram database, a comprehensive
linguistic model for examining how often commonly observed sequences of words occur. To extract useful information, I optimized this database and sorted it
alphabetically and by frequency so that information could be retrieved efficiently through a binary search. I then wrote an undemanding program able to quickly
deduce the relative probability of a word occurring given a few preceding words as context. Compression was achieved by first converting each word of an
input file into a ranking sorted by the word’s respective probability of occurrence compared to other words that could have occurred. Then, preexisting
compression algorithms were applied to the resultant rankings of the encoded file. This algorithm significantly outperformed multiple existing compression
algorithms, working particularly well in comparison with other methods when compressing small English text files. Even files less than a few hundred bytes
compressed to an average of 25% of their original size, an unprecedented ratio. With increased global dissemination of small text files such as those used in
cell phone text messages, emails, and social networking, this method implemented properly could significantly reduce the environmental strains of data storage
worldwide.
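A simplified Python sketch of the pipeline: here words are ranked by plain corpus frequency rather than by context-conditional probability from the Google N-Gram database, but it shows why rank coding helps: common words become small integers that a conventional compressor packs tightly.

import zlib
from collections import Counter

def rank_model(corpus: str) -> dict:
    # Rank words by frequency: rank 0 is the most common word.
    freq = Counter(corpus.lower().split())
    return {w: i for i, (w, _) in enumerate(freq.most_common())}

def compress(text: str, ranks: dict) -> bytes:
    # Replace each word with its rank, then apply a conventional
    # compressor; frequent words become short, repetitive tokens.
    coded = " ".join(str(ranks.get(w, len(ranks)))
                     for w in text.lower().split())
    return zlib.compress(coded.encode())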
Awards won at the 2010 ISEF
Second Award of $500 - IEEE Computer Society
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the United States Navy and Marine Corps
2010 - CS002
ELLIPTIC CURVE CRYPTOGRAPHY
Alexander Sergeevich Gandzyura
Ridgeview High School, Orange Park, FL
Cryptography is the study of hiding information. It has many applications, including homeland security. For the past 20 years, the RSA algorithm has provided asymmetric-cryptography security for communication networks, applications, Internet protocols, and GPS systems. It was believed that RSA could not be breached, but over the years processor speeds have multiplied, and RSA no longer provides sufficient security. There is a new generation of encryption: Elliptic Curve Cryptography (ECC). The objective of this project is to cryptanalyze ECC and show why it is the new generation of cryptography. In this project, the algorithm was broken down into multiple steps to explore the underlying mathematical puzzle. To use ECC practically, an instant-messenger program was created in Visual Basic .NET. It encrypts and decrypts conversations between two parties using a 256-bit key and guarantees the security of the conversation. What makes "ECC Instant Messenger v1" unique is that it encrypts and decrypts the whole incoming and outgoing data stream. After the ECC key agreement is made, the program uses the agreed value as an AES symmetric key; the key itself is never transmitted between the parties. For every new session, the program creates a new secret agreement and a new symmetric key. All interprocess communication and information are hashed with SHA-1. No man-in-the-middle attack is possible because the server accepts only one connection. For multiple users, an ECC digital-signature certificate was created. As a result, "ECC Instant Messenger v1" is a unique instant messenger that provides a double layer of security for all-purpose communication systems.
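For illustration, a toy elliptic-curve Diffie-Hellman key agreement is sketched below in Python. The 17-element field, curve, and generator are standard classroom values, nothing like the 256-bit parameters the project used, and the AES and SHA-1 stages are omitted.

# Toy curve y^2 = x^3 + 2x + 2 over F_17, generator (5, 1).
P_MOD, A = 17, 2
G = (5, 1)

def add(p, q):
    # Elliptic-curve point addition (None is the point at infinity).
    if p is None: return q
    if q is None: return p
    (x1, y1), (x2, y2) = p, q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p == q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    lam %= P_MOD
    x3 = (lam * lam - x1 - x2) % P_MOD
    return (x3, (lam * (x1 - x3) - y1) % P_MOD)

def mul(k, p):
    # Double-and-add scalar multiplication.
    r = None
    while k:
        if k & 1:
            r = add(r, p)
        p, k = add(p, p), k >> 1
    return r

a, b = 7, 11                           # the two parties' private keys
A_pub, B_pub = mul(a, G), mul(b, G)    # exchanged public points
assert mul(a, B_pub) == mul(b, A_pub)  # both derive the same shared secret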
2010 - CS010
MATRIX BASED DISCRETE LOGARITHMS PUBLIC KEY SYSTEM (MBDL) AND ITS APPLICATION IN SECURID
Yang Gao
Northeast Yucai School, Shenyang, Liaoning, CHINA
The public key system is one of the core technologies of information security, but it is currently inefficient, which has restricted its application on mobile equipment. MBDL is designed to accelerate the public key system's speed.<br><br> First, I give a description of the algorithm and discuss the method for choosing some key parameters. In the security analysis, I first analyze five attack methods and compare MBDL's security with RSA and ECC; afterwards I simulate an attacker's behavior and carry out a regression analysis. In the speed test, I first analyze its computational features and compare its speed with that of ElGamal and ECC. Finally, I implement it on the computer and measure its speed.<br><br> In order to compare its application value with that of RSA, I implemented SecurID with an MBDL core and an RSA core on a C51 single chip, respectively. The test result shows that MBDL is 1000 times faster than RSA, so MBDL can help popularize and promote information security.<br><br> In conclusion, MBDL has the same security as ECC, but its speed reaches 350 KB/s, about 200 times faster than ECC. MBDL demands less of the hardware than RSA, so it has fine application prospects.
Awards won at the 2010 ISEF
Fourth Award of $200 - Association for Computing Machinery
All expense paid trip to tour CERN - European Organization for Nuclear Research-CERN
Third Award of $1,000 - Computer Science - Presented by Intel
2010 - CS015
ABREADER: AN APPLICATION TO ANALYZE, INTERPOLATE AND EXTRAPOLATE DATA FROM AUDITORY BRAINSTEM RESPONSES
Ankush M. Gupta
duPont Manual High School, Louisville, KY
The lack of advances in the analysis of auditory brainstem response (ABR) data has significantly stalled the development of drugs and other breakthroughs
which could lead to improvements in the hearing ability in humans and animals.<br><br>The ABReader application aims to alleviate the gaps in functionality in
the software currently available by allowing professional researchers to simultaneously compare ABR data of different specimens and tests as well as receive
statistical information of value from the results. This allows them to analyze the results quantitatively, as opposed to attempting qualitative analysis of the graphs or performing invasive and usually lethal surgeries.<br><br>ABReader was created; it provides an easy-to-use interface for importing test data, the
ability to store the test data into a database for later retrieval and usage, the ability to selectively graph test data, the ability to obtain statistical information in
order to quantify the variance between tests, and other features of importance to researchers. ABReader provides a significant advantage compared to existing
solutions available to researchers in both usability and the features provided.<br><br>Presently, researchers are using ABReader to analyze the effects of a
folic acid treatment on hyperhomocysteinemic mice - using hearing ability as a benchmark. The software previously being used by the researchers did not allow
for comparative analysis of the data, but ABReader is now being utilized resulting in accurate quantitative data.
2010 - CS031
THE LIMITS OF ARTIFICIAL INTELLIGENCE: CAN A COMPUTER REALLY UNDERSTAND AN ENGLISH ESSAY?
Nicholas Hansen
Carbon High School, Price, UT
Computer software programs are increasingly used to evaluate student writing, giving quantitative scores on qualities such as persuasiveness, genuineness, and
the willingness of the writer to take risks. But can a computer really understand human writing? If a computer program purports to evaluate English writing for
both technical quality and meaning, then the results of the software evaluation should correlate to human evaluation.<br><br>In order to test the validity of
computer evaluation of expository writing, a name branded writing assessment software program was used to evaluate the following essays: 1) Abraham
Lincoln’s Gettysburg Address, 2) An excerpt from Mark Twain’s essay On the Decay of the Art of Lying and 3) a pseudo-essay on the monkey threat, designed
to meet the criteria of the program for quality, but which would probably be considered poor by human judges.<br><br>The results of the evaluation were
evaluated, compared, and analyzed to clarify the ability and limits of artificial intelligence to evaluate English prose. Abraham Lincoln’s Gettysburg Address
scored 20 out of 30 points. Mark Twain’s essay on the art of lying scored 23 out of 30 points. The pseudo-essay on the monkey threat scored 28 out of 30
points. <br><br>The name-branded writing assessment program was unable to distinguish a real English essay from a pseudo-essay. It ranked the pseudo-essay as being of higher quality than the real essays. The computer feedback appeared to respond intelligently to the ideas and meaning of the writing, but this
was only an imitation.
Awards won at the 2010 ISEF
Genius Scholarships - Sierra Nevada College
2010 - CS042
HEAD-GUIDED WHEELCHAIR CONTROL SYSTEM
John Blood Hinkel, III
Hopkinton High School, Hopkinton, MA
Many disabled individuals confined to a wheelchair do not have the use of their hands, thereby impeding their ability to control a motorized wheelchair.
Available systems that accomplish wheelchair control utilize potentially inhibitive peripheral devices. The purpose of this project was to design a system that
allows disabled individuals to control a motorized wheelchair with simple head movements. This system consists of a self-designed program, headset, and a
control module that interfaces with a motorized wheelchair joystick.<br><br>A program was written that translates accelerometer movements from a headset
into servomotor movements that operate the joystick control module. The program also enhances the effectiveness of the module with an emergency stop
(accounting for sudden unconsciousness) and a check box allowing the user to deactivate the system when necessary. To prevent unwanted movement,
deactivation halts all wheelchair motion by instructing the module to return the joystick to its neutral position.<br><br>The joystick control module was
designed to minimize the effects of friction on the joystick's motion. This was accomplished using miniature drawer slides to aid movement of the joystick, using
pulleys that rotated on ball bearings as part of the module's string-drive system, and by carefully aligning the string-drive system for ease of motion of the
servomotors.<br><br>The entire system accomplished the majority of design criteria. The system is compact and easily adaptable to different motorized
wheelchairs. It is also non-invasive and unobtrusive, as the program provides comfortable control to the user by interpreting natural head movements.
2010 - CS305
OPTICAL RECOGNITION OF HAND SIGNS
Irma Nayely Jara Diaz de Leon, Tania Janeth Rodriguez Diaz de Leon,
Centro de Bachillerato Tecnologico, industrial y de Servicios No.168, Aguascalientes, Aguascalientes, MEXICO
According to the Population Census conducted in Mexico, statistics indicate that in the city of Aguascalientes, Mex., 1.8% of the population has a hearing, visual, or language disability, which hampers their communication with others. The project goal is to develop artificial-vision software to identify and interpret the message in a person's hand signs. <br><br>The project is based on the theory of additive color and requires: 1) a personal computer with a web camera to capture the image of the hand signals; 2) a software program, developed in Visual Basic, which is able to a) interpret messages from hand signals in the Castilian alphabet and vice versa, and b) provide educational tools to facilitate learning of the hand signals. <br><br>This project builds on the following hypotheses: <br><br>H1 = The application of basic RGB colorimetry in a computer-assisted program enables optical recognition of hand signals for the disabled. <br><br>H2 = The use of a computer-assisted program facilitates the process of teaching and learning the hand signals. <br><br>The procedures implemented are: analysis, design, system development, documentation, experimentation, and analysis of results. The results show that the level of optical recognition of Castilian alphabet letters and digits is 85.8% and that ease of learning increases by 100%; therefore, there is evidence to accept hypotheses H1 and H2.
Awards won at the 2010 ISEF
First award $1,500 - Synaptics, Inc.
2010 - CS013
HARDWARE AND SOFTWARE OF A SMART HOUSE
Dmitry A. Kabak
Polytechnical Gymnasium #6, Minsk, BELARUS
Modern houses do not look the way they did some decades ago: they now contain several complex engineering and telecommunication systems. Developers all over the world are looking for ways to create ever more comfortable living conditions, and such solutions are called a "smart house" or "intelligent home". The scientific project presented is dedicated to developing such a device. It consists of two parts: a hardware component and a software component.<br><br>In addition to the software, a device has been developed with the following functions: control of an unlimited number of electrical devices (household appliances, lamps, security systems, etc.), control via the Internet, the ability to work with a remote control and in offline mode, and password protection. It also has an internal network that can connect any number of additional external devices, such as motion sensors and electronic key readers. Motion sensors can be used to control some devices automatically and in the security subsystem; electronic key readers can be used to control high-security-level functions. Any additional device can be connected, because the network infrastructure is highly extendable. The device is connected to the computer network via Ethernet.
2010 - CS046
HARDWARE-SOFTWARE REHABILITATION GAMING SOLUTION FOR CEREBRAL PALSY CHILDREN - TRAINING SPACE
Bohdan Kachmar
Lviv Physics and Mathematics Lyceum, Lviv, UKRAINE
There are many different gaming devices in rehabilitation. Some of them are used to train the movements of children with cerebral palsy. This raises the problem of creating a technology that can connect gaming hardware with software regardless of connection type and game programming technology.<br><br>Procedure, data<br><br> To split the work effectively between the developers of computer games and the engineers who create new input devices, both must use a standardized interface. It has to include the state of the joystick and control keys and personal player information. Creating a separate server application with this information allows it to be applied regardless of the game technology. A special issue is Flash applications, which have no access to gaming manipulators but provide powerful media tools for creating games. Using a network protocol such as HTTP is an effective way to connect game devices and games. This project takes over all the work of communicating with devices and provides a simple, fast, and standardized information-exchange interface to gaming applications: the game simply submits a request to the dedicated HTTP server and obtains all the necessary information about the player and the state of the gaming device, as sketched below. The developed technology is used in the rehabilitation game "Space Training".<br><br>Conclusions<br><br> A new interface between rehabilitation games and gaming devices was developed. It is a new method of logically connecting games and manipulators. It substantially simplifies the development of new rehabilitation software, and I hope it will boost R&D in this area and help return children with cerebral palsy to normal life.<br><br> The benefits of the proposed technology are demonstrated in the game "Space Training".
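A minimal Python sketch of the described interface, using only the standard library; the JSON field names are invented. Any game technology, including Flash, can poll this URL for the joystick state.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

DEVICE_STATE = {"player": "demo", "x": 0.1, "y": -0.3, "button": False}

class StateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Publish the current device state as JSON for any game to read.
        body = json.dumps(DEVICE_STATE).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A game submits GET requests to http://localhost:8080/ and receives
    # the player and joystick state in the response.
    HTTPServer(("localhost", 8080), StateHandler).serve_forever()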
2010 - CS026
THE ANALYSIS OF THE LEVER MECHANISM WITH COMPUTER APPLICATION
Raushan Askhatkuzy Kengesbayeva
High-School # 176, Almaty, KAZAKHSTAN
This work investigates the structure, kinematics, and quasistatics of a planar lever mechanism. The structural formula of the mechanism is derived using the methods and techniques of structural analysis. The equations of motion are composed by the graph method; the trajectories, point velocities, and angular velocities of the mechanism's links are determined by graphical and numerical methods for any position, taking into account external loads both fixed and varying in time.
2010 - CS018
DETERMINISTIC LEXICAL CATEGORIZATION USING GENETIC ALGORITHMS
Dru Harrington Knox
Roanoke Valley Governor's School for Science and Technology, Roanoke, VA
Current lexical categorization programs generally trade speed and resource use for higher accuracy. This study explored a novel approach to lexical
categorization which increased speed and reduced disk footprint with a minimal decrease in accuracy. The program created for the study, codenamed Genesis,
has implications in artificial intelligence, code breaking, and many other fields. <br><br>Genesis solves basic natural language processing problems using
genetic algorithms. The program generates random taggings for a given sentence and refines those taggings using a method similar to evolution.<br><br>To
assess the use of genetic algorithms for lexical categorization, Genesis was compared to a well-established categorization program using three benchmarks.
The first benchmark was a Big O analysis of algorithmic growth. The second was a comparison of file size between the two programs. Finally, a test corpus of
annotated English text was run through both programs, with the output being analyzed for accuracy. <br><br>The algorithmic growth rate for Genesis was
cubic, while the growth rate for the comparison program was exponential. Furthermore, the disk footprint of Genesis (200 kB) was 0.02% of the disk footprint for
the comparison program (1GB). Accuracy for Genesis was 93-95% versus 97-98% for the comparison program.
Awards won at the 2010 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
2010 - CS050
XOR ENCRYPTION REVISITED
Carl Robert Kolbo
Lakota High School, Lakota, ND
This project is my experiment on XOR encryption: encryption based on some special properties of the exclusive-OR bitwise operator (^ in C). <br><br>For my experiment, I tested how each of the three methods of XOR encryption I developed stands up to attacks other than brute force, and I developed three programs for cracking each method of encryption. <br><br>I measured and recorded the amount of time that it took for each method to be cracked. The repeat method of encryption took the longest amount of time to crack, so it is the most secure method against attacks other than brute force.
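For reference, the repeat method reduces to repeating-key XOR, whose defining property is that encryption and decryption are the same operation, since (m ^ k) ^ k == m. A minimal Python sketch:

def xor_repeat(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR: the same call both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"attack at dawn"
ct = xor_repeat(msg, b"key")
assert xor_repeat(ct, b"key") == msg   # round-trips exactly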
2010 - CS303
THE CLASSIFICATION AND RECOGNITION OF EMOTIONS IN PRERECORDED SPEECH
Akash Krishnan, Matthew Fernandez,
Oregon Episcopal School, Portland, OR
Using Matlab and a German emotional speech database with 534 files and seven emotions (anxiety, disgust, happiness, boredom, neutral, sadness, and
anger), we developed, trained, and tested a classification engine to determine emotions from an input signal. Emotion recognition has applications in security,
gaming, user-computer interactions, lie-detection, and enhancing synthesized speech. After our speech isolation algorithm and normalization was applied, 57
features were extracted, consisting of the minimum, mean, and maximum values of fundamental frequency, first three formant frequencies, log energy, average
magnitude difference, and 13 Mel-frequency cepstral coefficients (MFCC) with their first and second derivatives. The MFCC data, re-sorted from minimum to maximum, resembled a tangent function, so we developed a program to determine the optimal values of a and b in the tangent equation f(x) = a*tan((pi/b)(x - 500)). Clusters of the first 18 features were grouped and, in conjunction with a weighting system, used to train and classify the features of every emotion. In
addition, an MFCC input feature matrix was compared against each emotion’s MFCC feature matrix with another weighting system that gives importance to
dissimilarity among emotions. Overall, our program was 77% accurate, only 3% worse than an average person who identifies emotions with 80% accuracy.
Anxiety was 99% accurate, sadness had zero correlation with anger, and with neutral removed from the results our accuracy increased to 84%, implying that
neutral lies in the middle of the emotional spectrum. Future work will involve comparing the results of human subjects to our program's results and training our
program with new speech databases.
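The tangent fit can be reproduced with a standard least-squares routine. A Python sketch using SciPy on synthetic data standing in for the sorted MFCC values (the project's actual a and b are not given in the abstract):

import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b):
    # The abstract's model: f(x) = a * tan((pi / b) * (x - 500)).
    return a * np.tan((np.pi / b) * (x - 500))

x = np.arange(0, 1000)
y = f(x, 2.0, 2200.0) + np.random.default_rng(0).normal(0, 0.05, x.size)
(a_opt, b_opt), _ = curve_fit(f, x, y, p0=(1.0, 2000.0))
print(a_opt, b_opt)   # recovers approximately (2.0, 2200.0)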
Awards won at the 2010 ISEF
Trip to the EU Contest. - European Union Contest for Young Scientists
Team Second Award of $400 for each team member - IEEE Computer Society
Intel ISEF Best of Category Award of $5,000 for Top First Place Winners - Team Projects - Presented by Intel
Genius Scholarships - Sierra Nevada College
Second Award $1,000 - Symantec Corporation
2010 - CS302
MAKING CHEMISTRY EASIER WITH GENII
Gideon Christiaan Kruger, Werner van Zyl,
Duineveld High School, Upington, Northern Cape, SOUTH AFRICA
In our modern world chemistry is a crucial school subject, but students have developed a negative attitude towards it. Our purpose is to design computer software to serve as a study companion to the Periodic Table for students up to Grade 10, in effect making studying easier and more fun. Our hypothesis is that the software will help students learn chemistry.<br><br>Our research showed that 90% of students need such an aid. To design the software we did the following: 1) released a questionnaire to investigate the need for our software; 2) tabulated the results; 3) developed and debugged the software; 4) handed out pre-release beta versions for further testing; 5) finalized the software into a complete edition. <br><br>The software was previewed by teachers and students, who showed great interest and enthusiasm. 18 out of 20 Grade 10 students felt that using the study aid would improve their marks, which substantiates its pressing need.<br><br>We successfully designed, coded, and debugged software containing basic knowledge about the Periodic Table (i.e. masses and bonds). The study aid can be expanded in the future, for example by adding more questions to the interactive test, and it can be used by every chemistry student. 90% of students say that the software will help them, confirming our hypothesis.
Awards won at the 2010 ISEF
Fourth Award of $500 - Team Projects - Presented by Intel
2010 - CS034
ACCURATE PREDICTION AND TRACKING OF LUNG CANCER
Vedant S Kumar
duPont Manual High School, Louisville, KY
Lung cancer is a highly pervasive, destructive, and difficult to diagnose disease. It is characterized by metastatic pulmonary nodules, and radiologists often use
3D scanning technology to find these growths. The massive size, redundancy, and noisiness of 3D data sets (volumes); the obscurity of pulmonary nodules;
and human error contribute to lung cancer's startlingly low survival rate. The research goal was to devise a practical diagnostic program capable of
automatically locating, analyzing, and tracking pulmonary nodules across several volumes. It is expected that the use of novel segmentation techniques will
minimize false positives and negatives and result in a low error rate.<br><br>The program first eliminates background CT noise by using a median filter and
the region-growing algorithm. Expectation-maximization is used to optimize a histogram of voxel intensities. The resulting linear combination of discrete
Gaussians represents separate lung components. This combination is exploited to extract the lungs into a new binary volume. Region-growing is performed to
mitigate the effect of any potential classification errors. Expectation-maximization is repeated on the new volume to extract the vasculature, which is partitioned
into contiguous objects and then thoroughly analyzed. Analysis entails a continuum of shape detection and shade matching checks.<br><br>A T-test with 95%
confidence shows that the true mean error rate of the algorithm lies between 1.70% and 1.94% for malignant data sets (no false negatives), and only two false
positives were encountered. The completed C++ implementation is highly accurate, efficient, portable, and significantly eases lung cancer diagnosis by
generating interactive visualizations.
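The expectation-maximization step can be sketched with scikit-learn's Gaussian mixture model, which runs EM under the hood; this stands in for the authors' C++ implementation, and the median-filter and region-growing stages are omitted.

import numpy as np
from sklearn.mixture import GaussianMixture

def lung_mask(volume: np.ndarray) -> np.ndarray:
    # Model the voxel-intensity histogram as two Gaussians (air-filled
    # lung vs. denser tissue) and keep the darker component as a binary
    # lung volume.
    intensities = volume.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
    lung_label = int(np.argmin(gmm.means_.ravel()))
    labels = gmm.predict(intensities).reshape(volume.shape)
    return labels == lung_label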
Awards won at the 2010 ISEF
Second Award of $500 - American Statistical Association
All expense paid trip to tour CERN - European Organization for Nuclear Research-CERN
Second Award of $1,500 - Computer Science - Presented by Intel
2010 - CS304
S.T.E.V.E. - ARTIFICIAL INTELLIGENCE - AND TIC TAC TOE
Andrew Ryan Larsson, Daniel Scott James,
American Leadership Academy, Spanish Fork, UT
For our project, we created a robot that learns how to win at the game of Tic Tac Toe. We accomplished this through a process known as reinforcement learning. Basically, for each unique board layout, the robot has a certain chance of picking each valid move. As the robot plays a game, it picks moves based on the stored chances. If the robot wins, the chances of making the same moves are increased; conversely, if the robot loses, the chances of making those moves are decreased, biasing it toward beneficial moves (see the sketch after this abstract).<br><br>We created a program in C++ that manages the Tic Tac Toe games between the robot and several other AIs. We also composed a program in NXC that runs on the robot and controls the machine's movements. The two programs communicate through Bluetooth and allow the user to play a game with the robot, where the C++ program performs the calculations that allow the robot to learn.<br><br>Our hypothesis was that, after proper training, our robot would learn to lose fewer than 20% of the games it played. To test this prediction, we created several other AIs to play against our robot. After each increment of one thousand games, we recorded the percentages of wins, losses, and ties. Our data were conclusive and strongly supported our hypothesis. This project succeeded in confirming our hypothesis and research; it illustrates the ideas associated with artificial intelligence and simulated learning.
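A minimal Python sketch of the stored-chance update described above; the exact multipliers and floor are assumptions, since the abstract does not give them.

import random
from collections import defaultdict

# Every board state keeps a weight per legal move; weights of the moves
# actually played are raised after a win and lowered after a loss.
weights = defaultdict(lambda: defaultdict(lambda: 1.0))

def pick_move(board, legal):
    # Choose among legal moves in proportion to the stored chances.
    w = [weights[board][m] for m in legal]
    return random.choices(legal, w)[0]

def reinforce(history, won):
    # history: list of (board, move) pairs from one finished game.
    for board, move in history:
        weights[board][move] *= 1.1 if won else 0.9
        weights[board][move] = max(weights[board][move], 0.01)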
2010 - CS033
INTEGRATION OF DEVICES THROUGH THE VIRTUALIZATION IN THE WEB
Dongjun Lee
Korea Digital Media High School, Ansan-si, Gyeonggi-do, SOUTH KOREA
Currently, the devices that enable networking each have different characteristics, and these differences make processing and sharing data difficult. If we virtualize these networked devices into one view, we can create greater value by linking them. Therefore, we need a virtual place where we can collect all the information divided among the many devices, and we need a prototype that enables communication among them. I believe that in the future people will pursue virtualization and cloud computing, and we must also think of ways to overcome the limitations of platforms as devices with many different characteristics emerge. Therefore, I created and researched the Web as a virtual space that has no platform limitations. <br><br><br>The model of this study is divided into three parts: device, device virtualization engine, and client. By device I mean any device that enables networking. Each device is provided with data and an agent that makes its API reachable. By communicating directly with the device virtualization engine, the system provides the device's virtualized information to clients; in other words, it is a virtualization service viewer. Based on this model, I present specific examples of how to realize it, and through various virtual scenarios I show that the approach is useful.
2010 - CS040
CONTINUAL ADAPTATION OF ACOUSTIC MODELS FOR DOMAIN-SPECIFIC SPEECH RECOGNITION
David C. Liu
Lynbrook High School, San Jose, CA
Advances in automatic speech understanding bring a new paradigm of natural interaction with computers. The Web-Accessible Multi-Modal Interface (WAMI)
system provides a speech recognition service to a range of lightweight applications for Web browsers and cell phones.<br><br> However, WAMI currently has
two problems. First, to improve performance, it requires continual human intervention through expert tuning—an impractical endeavor for a large shared speech
recognition system serving many applications. Second, WAMI is limited by its global set of models, suboptimal for its unrelated applications.<br><br> In this
research I developed a method to automatically adapt acoustic models and improve performance. The system automatically produces a training set from the
utterances recognized with high confidence in the application context. I implemented this adaptive system and tested its performance using a data set of
106,663 utterances collected over one month from a voice-controlled game. The utterance error rate decreased 13.8% by training with an adaptation set of
32,500 automatically selected utterances, and the trend suggests that accuracy will continue to improve with more usage.<br><br> To solve the second
problem, I extended the WAMI system to create separate models for each application. The system can now adapt to domain-specific features such as specific
vocabularies, user demographics, and recording conditions. It also allows recognition domains to be defined based on any criteria, including gender, age group,
or geographic location.<br><br> These improvements to WAMI bring it one step closer towards being an “organic,” automatically-learning system.
Awards won at the 2010 ISEF
Second Award of $500, in addition the student's school will be awarded $200, and the student's mentor will be awarded $100 - Acoustical Society of America
Fourth Award of $500 - Computer Science - Presented by Intel
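The core of the adaptation method, selecting high-confidence recognitions as self-labeled training data, can be sketched as follows (illustrative Python; the field names and the 0.9 threshold are assumptions, not values from the project):

    def build_adaptation_set(utterances, threshold=0.9):
        """Keep only utterances the recognizer decoded with high confidence,
        pairing each audio clip with its own hypothesis as the training label."""
        return [(u["audio"], u["hypothesis"])
                for u in utterances if u["confidence"] >= threshold]

    log = [
        {"audio": "utt001.wav", "hypothesis": "move left", "confidence": 0.97},
        {"audio": "utt002.wav", "hypothesis": "fire",      "confidence": 0.41},
        {"audio": "utt003.wav", "hypothesis": "stop",      "confidence": 0.93},
    ]
    print(build_adaptation_set(log))   # utt001 and utt003 survive the filter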
2010 - CS306
MODEL-DRIVEN CONFIGURATION OF AUTOMATED PARKING FACILITIES
Jemale Donta Lockett, Omar Ismail
Alabama School of Fine Arts, Birmingham, AL
This project represents an investigation into the customization of an environment that supports a fleet of autonomous vehicles cooperating to solve a common
task. Specifically, the project is scoped within the context of an automated parking facility, whereby a driver may drop a car off at the entrance to a garage,
receive a reservation code, and leave the car behind. The vehicle will then be instructed on how to drive itself to an open parking space. The owner of the car
may return to the facility to retrieve their vehicle, which will be autonomously returned to the owner amid a set of other vehicles that may be entering and
leaving the facility concurrently. Determination of parking location and maneuvering to and from the space is coordinated between each vehicle and a host
controller at the facility, which can be configured for specific parking lots using a high-level modeling language. The controller handles all communication
between vehicles and provides instructions to each car regarding the directions to the assigned parking space. For the purposes of this project, small robots
were used to simulate cars and Bluetooth was the wireless communications medium between the cars and host controller.
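One piece of the host controller's job, assigning an arriving car to an open space, might look like the following sketch (the project configures facilities with a high-level modeling language and talks to cars over Bluetooth; this greedy nearest-space policy in Python is only an assumed illustration):

    def assign_space(free_spaces, entrance=(0, 0)):
        """Give the arriving car the free space closest to the entrance and mark it occupied."""
        if not free_spaces:
            return None   # facility full
        dist = lambda s: (s[0] - entrance[0])**2 + (s[1] - entrance[1])**2
        best = min(free_spaces, key=dist)
        free_spaces.remove(best)
        return best

    spaces = [(2, 3), (1, 1), (5, 0)]
    print(assign_space(spaces))   # (1, 1); the controller would then send driving directions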
2010 - CS005
THE UNIVERSAL TRANSLATOR: REAL TIME VOICE TO VOICE COMPUTER ASSISTED LANGUAGE TRANSLATION
Connor Austin Metz
Dunbar High School, Fort Myers, FL
Currently, conversation between people who speak different languages requires the assistance of a human translator. Some of the challenges in using human translators include privacy issues, expense, and availability. Accurate text-to-text machine translation is common, but there is no readily available program to translate spoken conversations in real time. This project offers a new option by combining off-the-shelf programs with a macro to provide this needed service. <br>
<br> After exploring several different techniques, a three step process was adopted. The initial step consists of turning spoken voice into text by utilizing Dragon
NaturallySpeaking®, which has the advantage of being available in multiple languages. The text from Dragon NaturallySpeaking® is then inserted into LEC Power Translator® to be translated. In the last step, a macro was created to insert the translated text into TextAloud® to be vocalized in the second language.
These programs were first evaluated individually to determine suitability for the project. The performance of the programs and the accuracy of the translation
were tested. The three programs were then combined with the macro to create a complete and functional product. <br><br>This project successfully performs
economical real time verbal translation and has substantial military, business, medical and humanitarian applications.
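The three-step pipeline can be illustrated with placeholder functions standing in for the commercial programs (hedged Python sketch; the real project chained Dragon NaturallySpeaking®, LEC Power Translator®, and TextAloud® with a macro, and the dictionary lookup below is a stand-in, not a real translator):

    def speech_to_text(audio):                 # stands in for Dragon NaturallySpeaking
        return "where is the hospital"

    def translate(text, src="en", dst="es"):   # stands in for LEC Power Translator
        lookup = {"where is the hospital": "donde esta el hospital"}
        return lookup.get(text, text)

    def text_to_speech(text):                  # stands in for TextAloud
        print(f"[speaking] {text}")

    def translate_conversation(audio):
        # The macro's job: pipe each stage's output into the next program.
        text_to_speech(translate(speech_to_text(audio)))

    translate_conversation("utterance.wav")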
2010 - CS016
SUDOKU...IT'S ALL ABOUT THE NUMBERS
Christy Nicole Munson
Ridgway Christian School, Pine Bluff, AR
The purpose of this experiment is to create a computer program, written in Visual Basic, that checks sudoku puzzles. My research question was: can I make a computer program to check a sudoku puzzle? My hypothesis is that I can make a sudoku-puzzle-checking program; my null hypothesis is that I cannot. <br><br>For this experiment a program was made to check the answers of a standard nine-by-nine sudoku puzzle. The program verified a solution by making sure that each number one through nine appears exactly once in each row, each column, and each three-by-three box.<br><br>The program did work: it can check whether a sudoku solution is correct and report whether or not it contains a mistake.
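The checking rule described above, each digit 1 through 9 exactly once per row, column, and 3x3 box, translates directly into code. A Python version for illustration (the project itself was written in Visual Basic):

    def check_sudoku(grid):
        """Return True if a 9x9 grid is a valid solved sudoku: every row,
        column, and 3x3 box contains each of 1..9 exactly once."""
        digits = set(range(1, 10))
        rows = grid
        cols = [[grid[r][c] for r in range(9)] for c in range(9)]
        boxes = [[grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)]
                 for br in range(0, 9, 3) for bc in range(0, 9, 3)]
        return all(set(unit) == digits for unit in rows + cols + boxes)

    # A known-valid grid generated by a standard shifting pattern:
    grid = [[(i * 3 + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
    print(check_sudoku(grid))   # True
    grid[0][0] = grid[0][1]     # introduce a duplicate
    print(check_sudoku(grid))   # False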
2010 - CS022
NEW MORPHOLOGICAL FEATURES FOR AUTOMATED CLASSIFICATION OF GALAXY IMAGES OBTAINED IN DEEP SPACE SURVEYS
Andrei V. Nagornyi
Stuyvesant High School, New York, NY
Automated classification of galaxy images is a current problem for which a solution is of the utmost importance. The volume of galaxy images being received
daily by astronomers by far exceeds any feasible amount that can be classified manually. This work sets out to contribute to the ongoing search for successful
techniques of galaxy classification. Two new image transformation techniques for extraction of new morphological features are developed and presented. The
two transformations, the Sand-Groove and Tousle-Comb transforms, are combined into a unified testing program which is implemented with a test set of 63 galaxy images. Statistical analysis of the resulting data reveals correlated trends across different types of galaxies. Specifically, the statistical data for spiral galaxies make it possible not only to classify a galaxy as a spiral, but also to determine its direction of rotation. Irregular and elliptical galaxies also
have unique statistical trends. After thorough testing, the two transformations are shown to be successful tools in aiding classification of galaxy images. Not
only do they provide accurate and revealing data that can be implemented with machine learning algorithms, but they also create the ability to determine the
direction of rotation of galaxy images. This is something no previous classification algorithm has accomplished.
Awards won at the 2010 ISEF
Priscilla and Bart Bok First Award of $1,000 - Astronomical Society of the Pacific and the American Astronomical Society
Third Award of $1,000 - Computer Science - Presented by Intel
2010 - CS032
NOVEL COMPUTER CONTROLLING WIRELESS DEVICE FOR HANDICAPPED PEOPLE
Ganindu Nanayakkara
Ananda College, Colombo - 10, Western, SRI LANKA
Computers play a major role in the modern world, and devices are getting more complicated daily. But unlike ordinary users, physically disabled people often lose the ability to experience the benefits of modern technology.<br><br>Preliminary research was carried out in order to analyze characteristics such as simplicity, reliability, customizability, and affordability of ICT-based products available in the market and specifically designed for handicapped users. As a result, I found that their qualities are not adequate to satisfy the requirements of such users. Therefore, the invention of a computer-controlling tool with all of the above qualities was considered a necessity.<br><br>The developed product is an interplay of hardware and software which controls an entire computer system using only 4 input commands. Its driver software contains all the basic features a user expects from a PC. The method of providing user input is fully adjustable to the user's requirements. It is also extremely simple, customizable, and affordable, so that almost any handicapped user can use and afford one.<br><br>This product is also responsible from the environmental point of view: a combination of special hardware and software features enables the invention to stand as an environmentally friendly “green product”.<br><br>In conclusion, the developed product stands out in a number of areas, such as functionality, economy, eco-friendliness, and simplicity. It is therefore ideal not only for handicapped users but also for ordinary PC users, although its design is particularly focused on the former.
Awards won at the 2010 ISEF
Trip to attend the China Adolescents Science and Technology Innovation Contest in August. - China Association for Science and Technology (CAST)
First Award of $3,000 - Computer Science - Presented by Intel
Award of $1,000 - National Collegiate Inventors and Innovators Alliance/The Lemelson Foundation
Second Award $1,000 - Symantec Corporation
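The abstract does not describe how 4 input commands can drive a whole PC; one common assistive-technology approach, assumed here purely for illustration, is a scanning menu tree in which the commands NEXT, PREV, SELECT, and BACK reach any action:

    # Hypothetical sketch: driving a computer from only 4 input commands.
    MENU = {
        "root":     ["mouse", "keyboard", "apps"],
        "mouse":    ["move up", "move down", "move left", "move right", "click"],
        "keyboard": ["a", "b", "c", "space", "enter"],
        "apps":     ["browser", "email"],
    }

    def run(commands):
        path, index = ["root"], 0
        for cmd in commands:
            items = MENU.get(path[-1], [])
            if cmd == "NEXT":
                index = (index + 1) % len(items)
            elif cmd == "PREV":
                index = (index - 1) % len(items)
            elif cmd == "SELECT":
                chosen = items[index]
                if chosen in MENU:
                    path.append(chosen)   # descend into a submenu
                    index = 0
                else:
                    print("action:", chosen)
            elif cmd == "BACK" and len(path) > 1:
                path.pop()
                index = 0

    run(["NEXT", "SELECT", "NEXT", "NEXT", "NEXT", "NEXT", "SELECT"])
    # descends into "keyboard", then triggers action: enter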
2010 - CS024
DETECTION OF PROSTATE CANCER USING IMAGE ANALYSIS
Saad Syed Nasser
Northside College Preparatory High School, Chicago, IL
The prostate cancer detection rate can be increased if suspicious locations on the prostate can be identified before the needle biopsy. Magnetic Resonance Imaging (MRI) can be utilized to detect cancerous locations on the prostate, and this can ultimately reduce the number of blind needle biopsies required for prostate cancer detection. To determine the effectiveness of MRI for prostate cancer detection, texture features were quantitatively extracted from MRI images at specific time points after contrast is injected. Analysis of our results showed that correlations between texture features and Gleason scores were very low when texture features were extracted from the entire prostate MRI image. Suspect regions of interest (ROIs) were then selected from the same prostate MRI images for texture feature extraction. These data correlated much more strongly with Gleason scores, and nearly all correlation coefficients were statistically significant at
the .05 and .01 alpha levels. <br><br> The purpose of computerized image analysis is to distinguish between benign and malignant prostate tissue by
automatic segmentation based on differentiating characteristics. The considerable variation in Gleason score grading between pathologists prompts the need
for an automatic segmentation tool that can aid pathologists in their grading and diagnosis of prostate cancer. To gauge the effectiveness of computerized
segmentation, it will have to be compared to manual segmentation. The results show that computerized segmentation does not compare favorably to manual
segmentation done by a pathologist. Independently, the results show that when using median gland size as the discriminating factor the computer can
accurately distinguish between benign and malignant tissue 93% of the time.
Awards won at the 2010 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
2010 - CS036
A SUPER-ENCRYPTION STANDARD FOR LARGE DATA USING ELEMENTARY CHAOTIC CELLULAR AUTOMATA
Akshay Nathan
Lynbrook High School, San Jose, CA
Cellular Automata (CA) are arrays of bits that evolve according to a specific rule. Some automata exhibit chaotic and random behavior which indicates that they
have potential for encryption. Many other attempts at building an encryption system based on CA have been vulnerable to certain types of attacks. The goal of
this project was to create and implement a novel encryption scheme based on cellular automata, and to evaluate its randomness, efficiency, and strength.<br>
<br>The resulting algorithm takes as input a 3-part key and a plaintext. A unique aspect of this scheme is that the plaintext itself is run through a CA and later recovered through a specially designed inverting algorithm. The final ciphertext can only be broken by knowing all 3 parts of the key.<br><br>Each preliminary
algorithm was implemented in C and tested using government recommended statistical tests. The final algorithm passed all of the tests multiple times, and
exhibited better randomness qualities than some supposedly “true” random number generators. The algorithm was also timed, and growth analysis showed that
with optimization, the scheme would be as fast as or faster than industry standard stream ciphers such as RC4.<br><br>By using super encryption through a
repeated sub-algorithm and by using a larger key, the scheme bypassed many of the attacks that are used against stream ciphers today. Although they display
very complex behavior, cellular automata operations are very simple, and can be easily integrated into hardware. Additionally, this stream cipher is extremely
conducive to parallel processing, making it ready for future computers. The results of this project demonstrate the practicality of cellular automata based stream
ciphers by presenting a simple but elegant prototype that is secure and efficient.
Awards won at the 2010 ISEF
Agilent offers paid summer internships at an Agilent site that aligns with the student and his/her background. - Agilent Technologies
Third Award of $250 - American Statistical Association
Fourth Award of $500 - Computer Science - Presented by Intel
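The abstract does not disclose the actual rule or key schedule; the following Python sketch shows the general idea of an elementary-CA stream cipher, using Rule 30 (a common choice for chaotic behavior) to derive a keystream that is XORed with the data:

    def ca_step(cells, rule=30):
        """One evolution step of an elementary cellular automaton (wraparound edges)."""
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
                for i in range(n)]

    def keystream(seed_bits, nbytes, rule=30):
        """Collect the automaton's center cell over time to form key bytes."""
        cells, out = list(seed_bits), []
        for _ in range(nbytes):
            byte = 0
            for _ in range(8):
                cells = ca_step(cells, rule)
                byte = byte << 1 | cells[len(cells) // 2]
            out.append(byte)
        return bytes(out)

    def xor_cipher(data, seed_bits):
        ks = keystream(seed_bits, len(data))
        return bytes(b ^ k for b, k in zip(data, ks))

    seed = [1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1]   # illustrative key bits
    ct = xor_cipher(b"secret", seed)
    print(xor_cipher(ct, seed))   # b'secret': XOR with the same keystream decrypts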
2010 - CS023
PROBABILISTIC COMPUTATION OF PI USING PARALLEL PROCESSING ON THE GRAPHICS PROCESSING UNIT
Alexandra Marie Porter
La Cueva High School, Albuquerque, NM
The purpose of this project is to examine the speed-up attainable by using parallel processing on a multi-core Graphics Processing Unit (GPU) compared to
serial processing on the Central Processing Unit (CPU) of a personal computer. GPUs have great potential for fast processing because they contain hundreds
of threads which can be used simultaneously for simple floating point calculations.<br><br> Two approximations of pi are written in C++ using the CUDA toolkit
for GPU execution. Both use geometric probability with a random number generator. The first approximation uses four simple arithmetic operations. The
second approximation is more expensive, using six operations and a sine function. Because the second approximation converged to a value slightly different
from pi, the accuracy of the sine function was investigated using power series approximations of varying degree for sine. These applications are well suited to
investigating speed-up with parallel processing because they are highly parallelizable, requiring no communication between the processors during the
calculations.<br><br> Each method of approximation was run on the GPU using parallel processing and on the CPU serially for comparison. The GPU proved
to be faster for larger sample sizes. At smaller sample sizes the CPU was more efficient due to the GPU set up and data copying time and to faster CPU clock
speed. The sample size at which the GPU becomes faster than the CPU is smaller for the approximation containing more expensive operations that are
parallelized. When the amount of parallelization was increased, execution times decreased proportionally.
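The first, cheaper approximation mentioned above is the classic geometric-probability estimate of pi. A serial Python sketch for illustration (the project used C++ with the CUDA toolkit; each sample is independent, which is exactly why the method parallelizes with no communication between processors):

    import random

    def approx_pi(samples):
        """Geometric probability: the fraction of random points in the unit square
        that fall inside the quarter circle approaches pi/4."""
        inside = sum(1 for _ in range(samples)
                     if random.random()**2 + random.random()**2 <= 1.0)
        return 4.0 * inside / samples

    print(approx_pi(1_000_000))   # ~3.14, converging as the sample size grows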
2010 - CS011
X-FINDER: THE ELECTRONIC GUARDIAN ANGEL
Maximilian Lukas Reif
Justus-von-Liebig-Gymnasium Neusab, Neusaess, Bavaria, GERMANY
My grandfather suffers from Alzheimer’s disease. He has lost his sense of orientation and easily gets lost. My grandmother cares for him. She is scared to lose
her husband in an unattended moment when leaving the house with him, especially in crowds. In such cases the electronic patient tracking system, X-Finder,
can be extremely helpful like a guardian angel. If my grandfather gets lost, the X-Finder locates him immediately and reliably using a mobile phone. It makes
my grandmother more confident and they can be more active together. Worldwide 35 million people suffer from dementia. 5.3 million US-Americans have
Alzheimer’s disease.<br><br>X-Finder is software loaded on the mobile phone of the person to be supervised. He does not need to interact with the phone; the search is automatic. The ‘Searcher’ is software loaded on the supervisor’s mobile. When the patient gets lost, the supervisor sends an SMS to the X-Finder. The X-Finder determines the patient’s position using GPS and sends this information back by SMS. The Searcher then navigates the supervisor to the patient’s position.<br><br>The challenge of the project was a structured, state-of-the-art software design using UML for multi-threaded Java MIDlets.<br><br>Compared to existing GPS trackers in the US, Europe, and Asia, the X-Finder needs no special-purpose devices and no intermediate tracking-service provider, has a very low cost per use, and has the advantage that several Searchers can participate in the search simultaneously.
Awards won at the 2010 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
Genius Scholarships - Sierra Nevada College
2010 - CS008
THE EFFECT OF VPN COMPRESSION ON DATA THROUGHPUT
Alexandra Lee Rey
Hernando High School, Brooksville, FL
When downloading files, you’re actually transferring compressible data over a network. A VPN (Virtual Private Network) uses encryption to create a private
network across the internet. Encryption secures communications across an insecure network. It’s used by people to connect from remote locations like their
home or a coffee shop to their business offices, so they can work remotely. I am comparing the time it takes for a file to be transferred through a VPN when the
VPN is using compression as well as when it’s not. Compression is a method of reducing the size of a file by locating patterns of data and replacing each
pattern with a smaller substitute.<br><br>If I compare the throughput of a VPN connection with and without compression, then I will see improved throughput when compression is used. The improvement will differ by file type because not all file types are capable of the same compression.<br><br>To test my hypothesis I installed a VPN client on my computer. I then connected the client to an IPSEC tunnel endpoint with VPN compression enabled. Next I logged into an FTP server, transferred the 7 different test files, and logged the results. I repeated the tests 1000 times, then repeated the whole process with VPN compression disabled.<br><br>After reviewing my data I came to the conclusion that my hypothesis was correct. I did see improvement using VPN compression, though the compression achieved differed between file types.<br><br>This information will allow the creation of code to improve compression of data across a VPN, increasing efficiency in data communications. VPN compression can only compress the data contained in a single packet. Perhaps code could be written to pre-compress the files automatically prior to sending them through the VPN.
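Why the improvement differs by file type can be illustrated by measuring compression ratios directly (Python sketch with zlib; the sample data below is illustrative, not the project's 7 test files):

    import os
    import zlib

    def compression_ratio(data):
        """Compressed size / original size; lower means more compressible."""
        return len(zlib.compress(data)) / len(data)

    text = b"the quick brown fox jumps over the lazy dog " * 200
    noise = os.urandom(9000)   # high-entropy data barely compresses

    print(f"text:  {compression_ratio(text):.2f}")   # small ratio: big VPN win
    print(f"noise: {compression_ratio(noise):.2f}")  # near 1.0: little win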
2010 - CS021
BEATHOVEN: IDENTIFYING AND INVENTING SOLUTIONS TO OBSTACLES HINDERING AUTOMATIC TRANSCRIPTION OF POLYPHONIC MUSIC OF
A SINGLE INSTRUMENT
Vighnesh Leonardo Shiv
Catlin Gabel School, Portland, OR
Automatic music transcription, the computerized translation of digital music to sheet music, has long remained an unsolved problem. Attempted solutions have
either made unreasonable assumptions or been based on heuristics not generally applicable. The purpose of this research was to develop a mathematically
rigorous application to solve automatic transcription of polyphonic music of a single instrument.<br><br>We created a test bed of music, climbing from notes to
chords to full musical pieces, and tested the accuracy of a variety of algorithms, both original and established, on these music files. We are now working on
finalizing and optimizing those algorithms that appeared most theoretically and practically sound.<br><br>Myriad obstacles to automatic music transcription
exist, of which the most significant are frequency detection, overtone elimination, and phantom fundamental construction.<br><br>Frequencies present must
be detected, as frequencies correspond to musical notes. Current frequency detection algorithms descend from the Fourier transform, subject to the Fourier
Uncertainty Principle: they cannot accurately detect the frequencies of short notes. We are developing a promising solution to frequency detection by
constructing a multidimensional convex polytope using a modified phase-I Simplex algorithm.<br><br>Most musical notes have both fundamental frequencies
and overtones. Overtones do not represent notes played and must be eliminated. Some notes have only overtones without a fundamental. According to
psychoacoustic theory, the ear will hear the “phantom” fundamental, so these missing frequencies must be constructed. We are exploring the relationship
between the phases of fundamentals and overtones as a method to identify overtones and phantom fundamentals.
Awards won at the 2010 ISEF
Fourth Award of $200 - Association for Computing Machinery
Third Award of $350 - IEEE Computer Society
Fourth Award of $500 - Computer Science - Presented by Intel
Award scholarship of $5,000 - Oregon Institute of Technology
UTC Stock with an approximate value of $2,000 - United Technologies Corporation
2010 - CS037
LOCAL LAYERING OF IMAGES WITH A NATURAL USER INTERFACE
Anna Kornfeld Simpson
Patrick Henry High School, San Diego, CA
Local layering is a new concept that allows users to "weave" images as if they are strips of paper or bring regions of interest on several different images to the
forefront simultaneously. In this project, I created a program to layer images locally with a minimum of actions done by the user. Conventional stack-based
image editing programs handle a relative depth ordering by compositing a series of images at different layering levels based on a global order. This provides no
support for local layering, reduces the possible image complexity, and limits the ability of the user to edit the image. <br><br>My program structures the
composite image into groups and provides a variety of methods for selecting these groups. To relayer, my program locates the nearest intersection of two
selected images and then uses integer vector manipulations to reorder the layers. All relayering is done in such a way that the composite remains physically
plausible, as if the images were sheets of paper that can be woven but not cut. This allows local editing of image layers with only one or two mouse clicks - a
significant improvement on both commercial image-editing programs and previous work on the subject. An application of my work is in the analysis of scans or
microscope slides in medicine or neuroscience, which would give doctors a more complete picture when making a diagnosis. Other applications include
compositions of satellite images for military intelligence and manipulations of images in entertainment and engineering.
2010 - CS017
INTELLIGENT ROBOT
Monika Svedirohova
SPSE Pardubice, Pardubice, CZECH REPUBLIC
There are situations that cannot be handled directly by humans due to danger or discomfort, e.g. natural or industrial disasters. In such cases it is more suitable to send a robot, which can also reach dangerous areas. The robot can be autonomous or teleoperated, so it needs robust wireless control from a PC or cellphone. It also includes basic autonomy using several sensors and a mechanical arm.<br><br>At the start of my project I planned all the functions, degrees of autonomy, and methods of control. Then I designed the electrical systems, made printed circuit boards, assembled the boards, wrote a program, and tested it over and over again to ensure correct operation. I then corrected errors and built the mechanical construction.<br><br>The first result of my work was a remote-controlled cybernetic vehicle. My robot was capable of decision-making based on signals from infrared sensors, and with its robotic gripper it can collect objects. There is also a liquid crystal display, where we can see current instructions and obstacle warnings. Everything is controlled by an Atmega32 MCU from the AVR 8-bit family. Thanks to Bluetooth communication, the vehicle can be controlled by a PC as well as a cellphone.<br><br>I then added a PC on top of the robot and implemented methods based on soft artificial intelligence, such as object recognition and data fusion. These features required a much more powerful computer than the 8-bit processor, but carrying a notebook would be too heavy, which is why I decided to use a netbook.
2010 - CS027
TRANSLITERATION FROM CYRILLIC TO THE LATIN AND ARABIC ALPHABETS, AND REVERSE TRANSLITERATION FROM THE LATIN ALPHABET TO CYRILLIC
Almas Tilegenov
Republican Specialized in Physics and Math, Almaty, Almaty, KAZAKHSTAN
The aim of this work is to find solutions that do not deprive the Kazakh nation of continuity, that allow the Latin alphabet to be integrated into information technology, and that allow information technology to develop in the future.
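A transliteration system of this kind is, at its core, a mapping table plus a rule for multi-letter sequences. A Python sketch (the letter mapping below is a partial, assumed example, not the author's scheme):

    # Partial illustrative mapping only; a full scheme must cover all Kazakh letters.
    CYR_TO_LAT = {
        "а": "a", "б": "b", "г": "g", "д": "d", "е": "e", "ж": "zh",
        "з": "z", "к": "k", "л": "l", "м": "m", "н": "n", "о": "o",
        "қ": "q", "ғ": "gh", "ң": "ng", "ү": "u", "ш": "sh",
    }
    LAT_TO_CYR = {v: k for k, v in CYR_TO_LAT.items()}

    def transliterate(text, table):
        """Greedy left-to-right mapping: try two-letter digraphs (zh, sh, ...)
        before single letters; unmapped characters pass through unchanged."""
        out, i = [], 0
        while i < len(text):
            pair = text[i:i + 2]
            if pair in table:
                out.append(table[pair]); i += 2
            elif text[i] in table:
                out.append(table[text[i]]); i += 1
            else:
                out.append(text[i]); i += 1
        return "".join(out)

    print(transliterate("жол", CYR_TO_LAT))   # "zhol"
    print(transliterate("zhol", LAT_TO_CYR))  # back to "жол"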
2010 - CS020
FINDING EFFICIENT SHELLSORT SEQUENCES USING GENETIC ALGORITHMS
Kevin David Wang
Wylie E. Groves High School, Beverly Hills, MI
Shellsort, discovered by D. L. Shell in 1959, is a sorting algorithm whose performance depends on the increment sequence chosen. Even though many attempts have been made to find an optimal sequence that would allow Shellsort to reach the lower bound of O(n log n) for comparison sorts, no such sequence has been discovered. This paper presents a method to find efficient increment sequences through the use of genetic algorithms and compares the performance of Shellsort with these sequences to merge sort and to Shellsort with other known remarkable sequences. It is concluded that the sequences found through genetic algorithms perform exceptionally well compared to merge sort and to Shellsort with other sequences, even though they do not reach O(n log n) performance.
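For reference, Shellsort parameterized by its increment sequence looks like this in Python (the Ciura gaps shown are a known strong sequence used here for illustration; the project evolved its own sequences with a genetic algorithm):

    import random

    def shellsort(a, gaps):
        """Shellsort: insertion sort over elements `gap` apart, for each gap
        in descending order. Performance hinges on the gap sequence."""
        for gap in gaps:
            for i in range(gap, len(a)):
                item, j = a[i], i
                while j >= gap and a[j - gap] > item:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = item
        return a

    data = random.sample(range(1000), 100)
    expected = sorted(data)
    print(shellsort(data, [701, 301, 132, 57, 23, 10, 4, 1]) == expected)   # True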
2010 - CS052
COOPERATION AND PUNISHMENT: A LOOK AT THE GENERAL PHENOMENON OF RETRIBUTION THROUGH EVOLVED STRATEGIES FOR A
MODIFIED PRISONER'S DILEMMA
Eli Nathan Weinstein
Flintridge Preparatory School, La Canada Flintridge, CA
The model of the Iterated Prisoner's Dilemma has been studied closely by many researchers for how it applies to the development of cooperation. In sociology this has been extended to how punishment - essentially hurting your opponent at a smaller cost to yourself - works to maintain cooperation. This project takes a more concrete approach to the subject, developing a cellular automata-based model of the iterated prisoner's dilemma (with punishment) combined with an evolutionary system using probability-based strategies. It was found that the environment varies too quickly and the method of creating strategies is not sophisticated enough, which prevented any complex strategies from developing or lasting successfully. Punishment was found to be used, but not in any well-defined way (such as to convince others to cooperate). There is evidence that small strategies developed to exploit the random nature of the simulation, but this did not allow any broad conclusions to be reached.
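A payoff function for the punishment-extended game might look like the following Python sketch (the payoff numbers are illustrative assumptions; the abstract specifies only that punishment hurts the opponent at a smaller cost to the punisher):

    def payoff(me, them):
        """Score for `me` in one round. C/D use standard Prisoner's Dilemma payoffs;
        punishing (P) acts like defection but additionally costs me 1 and costs
        my opponent 4."""
        base = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
        m = "D" if me == "P" else me
        t = "D" if them == "P" else them
        score = base[(m, t)]
        if me == "P":
            score -= 1     # small cost to the punisher
        if them == "P":
            score -= 4     # larger cost inflicted on the punished
        return score

    # Punishing a defector hurts them more than it hurts you:
    print(payoff("P", "D"), payoff("D", "P"))   # 0 and -3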
2010 - CS035
DOES PRACTICE MAKE PERFECT? THE ROLE OF TRAINING NEURAL NETWORKS
Brittany Michelle Wenger
The Out-Of-Door Academy, Sarasota, FL
Does practice really make perfect when applied to neural networks? Neural Networks operate by selecting the most successful option based on prior
experiences in a certain situation. This project explores the difference in learning levels between a soccer neural network trained in games versus a neural
network that was trained via scenarios, which emulate a practice type atmosphere, to determine which training mechanism is most beneficial. <br><br> This
project was developed from the existing soccer neural network. The program was enhanced to allow for the implementation of scenario based training. Ten
scenarios were defined to optimize the training experience. Twenty trials of scenario-trained teams were compared to twenty trials of game-trained teams. To assure the results were statistically significant, a t-test was conducted comparing both winning percentage and goal differential.<br><br> Out of forty trials, eleven trials achieved nearly optimal learning capacity – eight trained via scenarios and three trained through games. The average goal differential and winning percentage were better for the scenario-trained teams, and the results proved to be statistically significant at a 95% confidence level. Scenario-based training is more effective than game- or simulation-based training. The results confirm that the hypothesis was correct and that the conventional wisdom holds. Especially for those creating a medical neural network, I would recommend following the idiom "Practice Makes Perfect" when running simulations of the neural model, because you can never be too careful.
Awards won at the 2010 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
Third Award $750 - Symantec Corporation
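The statistical test described above is a two-sample t-test over the twenty trials per condition. A Python sketch with SciPy (the winning percentages below are invented for illustration, not the project's data):

    from scipy import stats

    # Hypothetical winning percentages for 20 scenario-trained and 20 game-trained teams.
    scenario = [0.72, 0.68, 0.75, 0.66, 0.71, 0.69, 0.74, 0.70, 0.73, 0.67,
                0.76, 0.65, 0.71, 0.70, 0.69, 0.72, 0.74, 0.68, 0.73, 0.70]
    game     = [0.61, 0.58, 0.64, 0.55, 0.63, 0.60, 0.59, 0.62, 0.57, 0.61,
                0.56, 0.63, 0.60, 0.58, 0.62, 0.59, 0.61, 0.57, 0.60, 0.64]

    t, p = stats.ttest_ind(scenario, game)
    print(f"t = {t:.2f}, p = {p:.4f}")   # p < 0.05 means significant at 95% confidence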
2010 - CS001
ROBUST VIDEO TRACKING THROUGH MULTIPLE OCCLUSIONS
Thomas Frederick Wilkason
Mount de Sales Academy, Macon, GA
Target tracking is becoming an important topic in the fields of surveillance, factory automation, and information gathering, as it identifies and tracks a certain object in a video stream based on its visual characteristics. The purpose of this project is to expand the field of target tracking by creating a computer vision program that acquires and tracks objects in a real-time video sequence, both with and without occlusions. Template Matching, Histogram Back Projection, and the Continuously Adaptive Mean-shift algorithms are used to locate an object within a video sequence, and multiple Kalman filters are used to track and predict the location of the object as it moves around a scene. The program was tested using a number of video sequences of a robot maneuvering around a track,
both with and without occlusions. The algorithm was designed to track the objects under various conditions, including recorded and live video. The program
measurements included the number of CAMShift convergence cycles with and without the Kalman filter, the error of the Kalman filter, the occlusion tracking
performance, and various process and measurement noise values for optimization. The results showed the tracker is most efficient with the Kalman filter, which
has an average error of one pixel. With the filter, the tracker efficiently predicts the location of the object through multiple occlusions and consistently reacquires
the target at the exact position where it exits the occlusion. This algorithm expands the abilities of artificial tracking by creating an efficient method of tracking
through occlusions.
Awards won at the 2010 ISEF
First Award of $1,000 - American Intellectual Property Law Association
Second Award of $1,500 - Computer Science - Presented by Intel
First Award of $3,000 - Air Force Research Laboratory on behalf of the United States Air Force
Tuition Scholarship Award of $4,000 for original research in an important Naval-relevant scientific area. Trip to the London International Youth Forum - Office of Naval
Research on behalf of the United States Navy and Marine Corps
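The occlusion handling rests on the Kalman filter's predict/update cycle: while the target is hidden, the tracker coasts on predictions. A minimal constant-velocity sketch in Python/NumPy (the noise parameters and simulated track are illustrative, not the project's values):

    import numpy as np

    class Kalman2D:
        """Constant-velocity Kalman filter over state [x, y, vx, vy]."""
        def __init__(self, dt=1.0):
            self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                               [0, 0, 1, 0], [0, 0, 0, 1]], float)
            self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # we measure x, y only
            self.Q = np.eye(4) * 0.01     # process noise
            self.R = np.eye(2) * 4.0      # measurement noise
            self.x = np.zeros(4)
            self.P = np.eye(4) * 100.0    # large initial uncertainty

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]             # position estimate used during occlusion

        def update(self, z):
            y = z - self.H @ self.x                     # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    kf = Kalman2D()
    for t in range(10):
        kf.predict()
        if 4 <= t <= 6:
            continue                      # occluded frames: coast on the prediction
        kf.update(np.array([2.0 * t, 1.5 * t]) + np.random.randn(2))
    print(np.round(kf.x[:2], 1))          # near the true position (18.0, 13.5)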
2010 - CS047
AESALON: VISUALIZING DYNAMICALLY-ALLOCATED MEMORY IN REAL-TIME
Kent Andrew Williams-King
Argyll Centre, Calgary, Alberta, CANADA
The tool created in this project, Aesalon, is capable of visualizing the dynamically-allocated memory usage of a program in real-time on a POSIX-based
platform. Before Aesalon, no such tool was known to exist.<br><br>Aesalon is split into two separate programs, the program monitor and the GUI, which
communicate using a TCP network socket. The program monitor uses an overload library and ptrace to collect data, including block sizes and allocation and
release backtraces. The GUI uses a revisioned binary tree to store the allocation history of a process efficiently, and can visualize the collected data in two
ways, the more useful of which is a density visualization with time and address space axes. Both the monitor and the GUI are multi-threaded.<br><br>With
Aesalon, a user may interact with a monitored program and immediately see the resulting memory activity, unlike other tools that perform similar tasks, such as
MemView.<br><br>Memory leak tracking and optimization are two possible uses for Aesalon. These applications have been successfully demonstrated on a
variety of complex sample programs. Aesalon was used to remove several memory leaks and to eliminate memory-related inefficiencies, resulting in speed
increases of up to 50%.<br><br>Two versions of Aesalon were released over the Internet under an Open Source license for peer review, and a third version is
under development. Future expansions upon Aesalon include filters for allowing a finer degree of control over the visualizations produced, and a debugging
information parser to refine the data gathered.
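The density visualization with time and address-space axes amounts to a weighted 2-D histogram over allocation events. A Python/NumPy sketch (the event format and synthetic data are assumptions; Aesalon itself collects events via an overload library and ptrace):

    import numpy as np

    def density_grid(events, t_bins=50, a_bins=50):
        """Bin (time, address, size) allocation events into a 2-D histogram with
        time and address-space axes, weighting cells by block size."""
        t = [e[0] for e in events]
        a = [e[1] for e in events]
        w = [e[2] for e in events]
        grid, _, _ = np.histogram2d(t, a, bins=(t_bins, a_bins), weights=w)
        return grid

    # Synthetic event stream: (timestamp, address, bytes)
    rng = np.random.default_rng(0)
    events = [(i, int(rng.integers(0x1000, 0x9000)), int(rng.integers(16, 512)))
              for i in range(10_000)]
    print(density_grid(events).shape)   # (50, 50) heat map ready for rendering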
2010 - CS006
OPTIMIZING DETERMINISTIC NUMERICAL PRIMALITY TESTING ALGORITHMS FOR MAXIMUM PERFORMANCE
Joseph Christopher Woodson
Home School, Tulsa, OK
The purpose of my experiment was to determine how varying degrees of number-sieve pruning affect the performance of a trial-division deterministic primality
testing algorithm. I created a highly optimized program in C# using Microsoft Visual Studio 2010 Beta 2. The program was made to take an input number and
find the 5 prime numbers following it. A varying number of counters pruned the set of divisors for each number as it was being tested. Each of the 5 primes
found served as a single trial; thus, I had 5 trials per run. All input numbers began with 8 or more 1’s in unsigned binary format. I tested one input number for
every even number of bits between 8 and 80 inclusive. Each of these numbers was tested once for every number of counters between 0 and 12. From 8 to 20
bits, no significant correlation was found between the number of counters and the program’s performance. From 22 to 64 bits, the tests utilizing 1 counter were
generally the fastest, with a downward trend in speed in both directions. Above 64 bits, no statistically significant trend is visible, although the test with 72 bits
showed a slight trend bottoming out at 4 counters. Removal of obvious outliers (mostly the first trial in each set) was necessary to observe the trend in tests
between 22 and 32 bits.
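A trial-division tester whose divisor set is pruned of multiples of small primes, the role played by the project's "counters", can be sketched as follows (Python; the exact counter mechanism in the C# program is not described, so this is an assumed analogue):

    def is_prime(n, skip_primes=(2, 3)):
        """Trial division, pruning candidate divisors that are multiples of the
        small primes in `skip_primes`."""
        if n < 2:
            return False
        for p in skip_primes:
            if n % p == 0:
                return n == p
        d = 5
        while d * d <= n:
            if any(d % p == 0 for p in skip_primes):
                d += 2          # pruned: d shares a factor with a small prime
                continue
            if n % d == 0:
                return False
            d += 2
        return True

    def next_primes(start, count=5):
        """Find the `count` primes following `start`, as in the experiment."""
        out, n = [], start + 1
        while len(out) < count:
            if is_prime(n):
                out.append(n)
            n += 1
        return out

    print(next_primes(255))   # [257, 263, 269, 271, 277]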
2010 - CS307
SYNCHRONOUS TANGIBLE AUGMENTED REALITY
Lai Xue, Darren Lim, Hyun Ki Lee
Chengdu International School, Chengdu, Sichuan, CHINA
As technology has inexorably advanced over the years, one trend has been particularly prominent: increased interaction between humans and machines. While
current Augmented Reality systems may allow enhanced interaction between humans and computers, their lack of tangibility cripples the potential that human-machine interaction can achieve. In our project, we attempted to construct a Synchronous Tangible Augmented Reality (STAR) system that allowed the user to
tactually manipulate viewed graphics. By researching Augmented Reality techniques, and amalgamating components from different schools of thought, we
managed to successfully assemble a STAR system by using a processing unit, a Head Mounted Display (HMD), infrared-lit stylus, and multiple cameras for the
purposes of tracking and vision. In terms of software, we optimally allocated different programming languages for different tasks: a head-tracking application,
which was written in C, handled tracking of the HMD and the stylus, while a Flash-written 3D engine handled the rendering and overlay of computer graphics;
carefully crafted code allowed for the inter-process communication between these two modules. As well, we demonstrated that STAR is more than just a
theoretical prototype; it is the blueprint for a product that can allow for integration into daily life and more, including but not limited to architectural and fashion
design, enhanced collaboration and presentations, and amazingly realistic simulations. All in all, STAR has the potential to revolutionize the face of media
usage and the capacity to redefine the concept of physical space, a capacity only last seen in the advent of the Internet.
Awards won at the 2010 ISEF
Second Award of $1,500 - Team Projects - Presented by Intel
2010 - CS039
IT-SPACE : A PLANNER FOR ALLOCATING OBJECTS IN A SPACE USING META-HEURISTIC ALGORITHM
Jun Young Yun
Whimoon High School, Seoul, Seoul, SOUTH KOREA
It is important to maximize space utilization, particularly when we initiate the space allocation for objects such as shops in the building. However, it is hard to
find the best allocation solution because allocation constraints are so diverse. The purpose of this research is to find a solution for allocating objects in the
space by using a meta-heuristic algorithm.<br><br>I developed a prototype named ‘IT-SPACE’ which implements my ideas as follows. In IT-SPACE, a space is formed from rectangular rooms and paths. Object-allocation objectives are given through user-specified constraints, which are requirements of each object, such as "size" and "distance". IT-SPACE converts the composition of a space into a mathematical graph model and calculates an "estimation score" from the given user-specified constraints; the estimation score is the acceptance level of each allocation solution. IT-SPACE then treats the allocation problem as a search problem, comparing estimation scores among possible solutions and choosing the best one. <br><br>Compared with other algorithms, this allocating algorithm is efficient and effective enough to be applied in practice. In addition, IT-SPACE has a simple, easy-to-use interface, facilitating its use by non-professional users.
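Meta-heuristic search over allocations can be illustrated with simulated annealing (Python sketch; the abstract does not name the specific meta-heuristic, and the toy scoring function below stands in for IT-SPACE's constraint-based estimation scores):

    import math
    import random

    def anneal(rooms, objects, score, steps=20000):
        """Start from a random assignment of objects to rooms and accept
        worsening moves with a probability that shrinks over time."""
        assign = {o: random.choice(rooms) for o in objects}
        cur = score(assign)
        best, best_s = dict(assign), cur
        for step in range(steps):
            temp = max(1e-3, 1.0 - step / steps)
            o = random.choice(objects)
            old = assign[o]
            assign[o] = random.choice(rooms)
            new = score(assign)
            if new >= cur or random.random() < math.exp((new - cur) / temp):
                cur = new
                if new > best_s:
                    best, best_s = dict(assign), new
            else:
                assign[o] = old          # revert the rejected move
        return best, best_s

    # Toy constraint: each shop wants a room whose size matches its requirement.
    rooms = ["small", "medium", "large"]
    need = {"kiosk": "small", "cafe": "medium", "anchor": "large"}
    score = lambda a: sum(a[o] == need[o] for o in a)   # higher is better
    print(anneal(rooms, list(need), score))   # finds the perfect assignment, score 3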
2010 - CS019
A PARALLEL COMPUTATIONAL FRAMEWORK FOR SOLVING QUADRATIC ASSIGNMENT PROBLEMS EXACTLY
Michael Christopher Yurko
Detroit Catholic Central High School, Novi, MI
The Quadratic Assignment Problem (QAP) is a combinatorial optimization problem used to model a number of different engineering applications. Originally it
was the problem of optimally placing electronic components to minimize wire length. However, essentially the same problem occurs in backboard and circuit
wiring and testing, facility layout, urban planning, ergonomics, scheduling, and generally in location problems. Additionally, it is one of the most
computationally difficult combinatorial problems known. For example, a recent solution of a problem of size thirty using a state-of-the-art solver took the
equivalent of 7 single-CPU years. The goal of this project was to create an open and easily extendable parallel framework for solving the QAP exactly. This
framework has shown good scalability to many cores. It experimentally has over 95% efficiency when run on a system with 24 cores. This framework is
designed to be modular to allow for the addition of different initial heuristics and lower bounds. The framework was tested with many heuristics including a new
gradient projection heuristic and a simulated annealing procedure. The framework was also tested with different lower bounds including the Gilmore-Lawler
bound (GLB). The GLB was computed using a custom implementation of the Kuhn-Munkres algorithm to solve the associated linear assignment problem
(LAP). The core backtracking solver uses the unique approach of only considering partial solutions rather than recursively solving sub-problems. This allows for
more efficient parallelization as inter-process communication is kept to a minimum.
Awards won at the 2010 ISEF
Third Award of $300 - Association for Computing Machinery
Second Award of $1,500 - Computer Science - Presented by Intel
Third Award $750 - Symantec Corporation
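The solver's core idea, extending partial solutions and pruning those whose cost already exceeds the best known, can be shown on a tiny instance (Python sketch; real solvers add lower bounds such as the Gilmore-Lawler bound on top of the bare partial cost used here):

    # QAP: assign facilities to locations minimizing sum(flow * distance).
    F = [[0, 3, 1],
         [3, 0, 2],
         [1, 2, 0]]          # flow between facilities
    D = [[0, 1, 4],
         [1, 0, 2],
         [4, 2, 0]]          # distance between locations

    best = [float("inf"), None]

    def partial_cost(perm):
        """Cost contributed by the facilities placed so far."""
        return sum(F[i][j] * D[perm[i]][perm[j]]
                   for i in range(len(perm)) for j in range(len(perm)))

    def backtrack(perm, free):
        if partial_cost(perm) >= best[0]:
            return               # prune: this partial solution is already too costly
        if not free:
            best[0], best[1] = partial_cost(perm), list(perm)
            return
        for loc in sorted(free):
            free.remove(loc)
            backtrack(perm + [loc], free)
            free.add(loc)

    backtrack([], set(range(3)))
    print(best)                  # [22, [0, 1, 2]]: optimal cost and assignment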
2010 - CS053
EFFECTS OF SACCADIC EYE MOVEMENT ON CONTINUOUS EEG
Brittany Alexa Zelch
Pine Crest School, Fort Lauderdale, FL
Electroencephalography, or EEG, is very useful in examining the patterns and activity of the brain. During EEG recordings, eye movement is recorded by
Electro-Oculogram, or EOG. Saccades in the EOG can disrupt the recorded brain activity and cause the formation of artifacts in the EEG. Although there
are established methods that can be used to remove certain artifacts such as blinks and muscular movement, there are no techniques that can accurately
remove saccades from EEG data. The purpose of this study is to identify and model the amplitude and spatial distribution of EEG contamination and study its
relationship to the correlating EOG signal. Thus far, results have shown a specific trend that illustrates a direct correlation between EOG saccades and EEG
artifacts. In addition, a complete phase shift in EEG artifact over the range of horizontal saccades was also discovered. The next step in this study will be to
obtain a more thorough understanding of the saccadic movements and create a mathematical formula that can be applied to all continuous EEGs to remove
saccades.
Awards won at the 2010 ISEF
First Award of $200 - Patent and Trademark Office Society
2010 - CS025
SELF-ORGANIZING BEHAVIOUR IN A SOCIAL NETWORK REPRESENTATION OF INFORMATION
Nelson Zhang
Shanghai American School, Shanghai, CHINA
The organization and processing of information is a common underlying problem in situations ranging from web browsing to research. Sources of information
such as books, academic papers, or the Internet are presented without any inherent structure beyond manually added hyperlinks or citations. A solution to this
problem could conceivably benefit any situation where users are confronted with large amounts of unorganized information.<br><br>The algorithm presented in
this project models the relationships between files as a social network. Files are mapped onto vertices of a graph and relevance converted into weighted edges.
The algorithm generates tags for each file and an adjacency matrix representing edges in the equivalent social network. From this network, well-connected
cliques of similar files are selected through three different centrality measures – degree centrality, closeness centrality, and clustering coefficient. The network
is rendered as a swarm of agents, each attracted to members of its own clique based on edge weights.<br><br>Additionally, an augmented reality platform was
developed to allow users to physically manipulate the swarm representation of information.<br><br>Clustering coefficient was found to be the most effective
centrality measure; the cliques generated by the algorithm show a high correlation with the concepts identified in the same sets of files by human subjects in a
questionnaire. The correlation also increases with file count, implying that this algorithm can be scaled to much larger applications than those tested in this
experiment. Furthermore, the swarm representation exhibits emergent flocking behaviour, which leads to an intuitive interface for visualizing and manipulating
information.
Awards won at the 2010 ISEF
Fourth Award of $200 - Association for Computing Machinery
HM - American Statistical Association
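The winning centrality measure, the clustering coefficient, is easy to compute once files and relevance weights are cast as a graph. A Python sketch with NetworkX (the file network below is an invented toy example):

    import networkx as nx

    # Toy file-relevance network: nodes are files, weighted edges are relevance.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("notes.txt", "paper.pdf", 0.9), ("paper.pdf", "refs.bib", 0.8),
        ("notes.txt", "refs.bib", 0.7),  ("todo.txt", "notes.txt", 0.2),
    ])

    # The clustering coefficient is high for files embedded in tightly knit cliques,
    # which is why it surfaces groups of conceptually related files.
    for node, cc in nx.clustering(G, weight="weight").items():
        print(f"{node:10s} {cc:.2f}")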
2011 - CS010
DRAGTOP
Alessandro Abati
Liceo Scientifico Niccolo Copernico, Prato, Prato, ITALY
DragTop, the software I have conceived, is an image-viewer application that offers a brand new range of interactivity and it is so distinctive because it takes a
new direction, unattempted thus far: DragTop is dynamic where others are fixed; it is natural and intuitive where others are complex; it is fresh and cool where
others are boring; it is expressly shaped for touch input and gestures where others fail to come up to technological expectations. The interface is minimal, no
menus, no buttons, no arrow keys needed, just photos. DragTop does not need any such things because it is conceived to be intuitive and direct. Users can
move, rotate or throw any picture they want, just using their fingers. Moreover, simple gestures are enough to collect and arrange groups of pictures in
advanced ways: nothing could be easier.<br><br>It was the sense of detachment I had felt time and again when viewing photos with traditional software that
pushed me to create my own application. So I chose the ActionScript3 programming language and I began to give a shape to my idea starting from scratch. In
order to create a virtual desk, on which photos could lean on, I had to write a physics engine able to compute photo speed and spin values, simulate friction and
detect collisions. But that was not enough, my idea developed further so as to bring in also various arrangement methods, full screen view, nice graphic effects
and lots more. This is what DragTop showcases.
Awards won at the 2011 ISEF
Third Award of $250 - Synaptics, Inc.
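The heart of such a physics engine is a per-frame update that applies velocity and bleeds off speed and spin through friction. A Python sketch (the project was written in ActionScript 3; the damping factor here is an assumed value):

    FRICTION = 0.95          # assumed per-frame damping factor

    class Photo:
        def __init__(self, x, y):
            self.x, self.y = x, y
            self.vx = self.vy = self.spin = self.angle = 0.0

        def throw(self, vx, vy, spin):
            self.vx, self.vy, self.spin = vx, vy, spin

        def step(self):
            # Advance by current velocity, then let friction slow everything down.
            self.x += self.vx
            self.y += self.vy
            self.angle += self.spin
            self.vx *= FRICTION
            self.vy *= FRICTION
            self.spin *= FRICTION

    p = Photo(0, 0)
    p.throw(vx=12, vy=4, spin=2)
    for _ in range(30):
        p.step()
    print(round(p.x, 1), round(p.angle, 1))   # the photo glides to rest as friction acts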
2011 - CS022
MEDIA RHYME
Jad Habib Aizarani
Canadian High School, Beirut, South, LEBANON
I have always felt the need to make web surfing easier, taking into consideration time, money, and the preservation of the environment. I really believe that there are many gaps waiting to be filled on the internet. After two months of hard work and sleepless nights, the result finally emerged: a one-of-a-kind search engine I called "JIT Download Mania" and later renamed "Media Rhyme", for which I bought the domain www.mediarhyme.com.<br><br>Media Rhyme is a search engine that searches for any file on the internet (audio, video, e-books, ..)
2011 - CS046
THE DATA STRONGHOLD
Andrue Storm Anderson
Glen Rose High School, Malvern, AR
My project is about computer and network security. With this project I developed a computer algorithm to improve the security of my computer program from last year. This project is a continuation of "Worms, Horses, and the Backdoor?" and deals with creating a computer algorithm to detect and prevent arbitrary data execution. To begin my project I studied online how a computer works with respect to the CPU and the memory 'stack'. I already knew about the intrusion attempts hackers use from my previous research, and I combined the knowledge I gained from studying processes with that previous knowledge to begin developing my algorithm. Through various methods, like my mouse-click tracking algorithm and process-identifying code method, I was able to prevent all current methods of arbitrary data execution, and I believe my project and research were successful.
Awards won at the 2011 ISEF
Scholarship Award of $15,000 per year, renewable annually - Florida Institute of Technology
2011 - CS025
CATASTROPHIC DISASTER RESPONSE AND RECOVERY LOGISTICS SYSTEM SIMULATION (C-DRRLSS)
Rohan Arepally
Stoney Creek High School, Rochester Hills, MI
Directed graphs (digraphs) provide a means to address network flow problems that consider flow capacities and flows in the edges connecting the vertices; a particularly important application is to maximize flow in the network. In graph theory, a graph G(V, E) consists of vertices, V, and edges, E. The flow, f, and capacity, c, along an edge u-v are given, and maximum flow is obtained along augmenting paths, using the Ford-Fulkerson or Edmonds-Karp algorithms, which use depth-first search (DFS) and breadth-first search (BFS) respectively. In this project, the basic maximum-flow network problem was adapted to cater to unique conditions that pose real-world challenges. For instance, weighted categories of flow "candidates" (elements that need to reach targeted intermediate destinations under temporal constraints) and the allocation of appropriate resources to enable the flow are of specific interest and are addressed in the paper. An algorithm was developed, and a new metric, the yield performance index (YPI), was proposed to quantify and rank the performance of different paths so as to maximize positive outcomes for the problem set. A complex scenario of a nuclear explosion in a populated city was considered as a case study. The object-oriented programming language Java was used to formulate the nuclear blast problem and to generate the input parameters and graph G(V, E) for the enhanced network-flow algorithm, which computes YPI values to maximize flow, flow rate, and yield. The computer program, Catastrophic Disaster Response and Recovery Logistics System Simulation (C-DRRLSS), was developed based on the proposed algorithm and will enable sensitivity studies to help plan and prepare rescue operations in the event of a disaster.
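For reference, the Edmonds-Karp variant mentioned above augments along shortest BFS paths in the residual graph. A compact Python sketch on a toy supply network (the capacities are illustrative):

    from collections import deque

    def max_flow(cap, s, t):
        """Edmonds-Karp: repeatedly augment along the shortest (BFS) path
        in the residual graph until no s-t path remains."""
        n = len(cap)
        flow = [[0] * n for _ in range(n)]
        total = 0
        while True:
            parent = [-1] * n
            parent[s] = s
            q = deque([s])
            while q and parent[t] == -1:
                u = q.popleft()
                for v in range(n):
                    if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                        parent[v] = u
                        q.append(v)
            if parent[t] == -1:
                return total                 # no augmenting path left
            # Find the bottleneck residual capacity along the path, then push it.
            push, v = float("inf"), t
            while v != s:
                u = parent[v]
                push = min(push, cap[u][v] - flow[u][v])
                v = u
            v = t
            while v != s:
                u = parent[v]
                flow[u][v] += push
                flow[v][u] -= push           # reverse edge for residual flow
                v = u
            total += push

    # Supplies (node 0) routed through staging areas to a disaster zone (node 3).
    cap = [[0, 10, 8, 0],
           [0, 0, 5, 7],
           [0, 0, 0, 9],
           [0, 0, 0, 0]]
    print(max_flow(cap, 0, 3))   # 16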
2011 - CS005
RINGERALERT
William Barbaro
Carroll High School, Dayton, OH
The goal of this project was to design an application for the Apple iPhone® that would alert the user to disable the ringer on their iPhone at a determined
location or time. Testing was done to determine the accuracy, frequency, and battery power used by each of CLLocationManager's settings. This data was
used to devise a geofencing algorithm that would be both accurate and power efficient. The application was written in Objective-C. CLLocation’s significant
location change was used to monitor the user’s location while the application was inactive or in the background. A location, chosen and tagged by the user, was
used as the center of a 5000 meter radius. When a significant location change was detected within 10000 meters of a tagged location, GPS was activated and
set to kCLLocationAccuracyThreeKilometers. Once the 5000 meter region was entered, accuracy was increased to kCLLocationAccuracyHundredMeters.
When the user entered the 500 meter region, accuracy was increased to kCLLocationAccuracyBest. As the 100 meter region was entered, a
UILocalNotification was sent to alert the user of entry into their tagged location and the GPS terminated. The user also had the ability to schedule personal
reminder alerts as timed local notifications. A view was constructed which allowed the user to select buttons for days of the week, hours, minutes, and AM or
PM. NSDateFormatter used the tags of the selected buttons to create an NSDate that scheduled the notifications with a weekly repeat interval. The application
ran free of errors and met the defined need.
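The tiered geofencing policy can be restated in a few lines (a Python restatement of the logic for illustration; the actual app used Objective-C and CLLocationManager, and the coordinates below are arbitrary):

    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters."""
        R = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = p2 - p1, math.radians(lon2 - lon1)
        a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
        return 2 * R * math.asin(math.sqrt(a))

    def gps_accuracy_tier(dist_m):
        """The abstract's tiers: the closer the user gets to the tagged location,
        the more accurate (and power-hungry) the GPS setting becomes."""
        if dist_m > 10000: return "significant-change monitoring only"
        if dist_m > 5000:  return "kCLLocationAccuracyThreeKilometers"
        if dist_m > 500:   return "kCLLocationAccuracyHundredMeters"
        if dist_m > 100:   return "kCLLocationAccuracyBest"
        return "alert user and stop GPS"

    tagged = (39.7589, -84.1916)    # illustrative tagged location
    here = (39.7600, -84.1900)
    print(gps_accuracy_tier(haversine_m(*tagged, *here)))   # "kCLLocationAccuracyBest" at ~180 m out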
2011 - CS041
PREFIX-TREE BASED ANOMALY DETECTION IN CRITICAL SOFTWARE SYSTEMS
Favyen Bastani
Texas Academy of Mathematics and Science, Denton, TX
Software is central to the modern world, ubiquitously embedded in almost all systems surrounding us. Consequently, software failures can have devastating
effects. Anomaly detection methods have been investigated to detect software faults by analyzing execution traces collected at run time, but many anomaly
detection algorithms are inefficient. I propose a significant improvement in anomaly detection using a prefix-tree based approach. First, a prefix tree compresses the set of execution sequences, because execution often begins along a shared path before branching off when input differs; operating on this compressed prefix tree can significantly decrease processing time. Second, the prefix tree can be used as a heuristic to efficiently compute the set of K-nearest neighbors needed in many anomaly detection algorithms; the heuristic assumes that nodes with a longer shared prefix path are more likely to be nearest neighbors. I integrate
prefix tree with a clustering based algorithm (MOGA-HC) and a density based algorithm (LOF), and evaluate the effectiveness of the novel approach using a
dataset from Cisco's VoIP CallManager system, which includes 197,628 execution traces. The metrics for evaluation are execution time, precision, and recall.
Results show that the prefix tree based MOGA-HC approach is forty times faster than the standard MOGA-HC, taking 51.2 minutes to find anomalies. Prefix
tree based LOF shows an even greater improvement over standard LOF: the search time is reduced from days to seconds (over a 77,000-fold increase in
speed). This improvement in execution time is critical as it allows for on-the-fly fault detection and recovery in critical software systems, avoiding costly
damages.
Awards won at the 2011 ISEF
Full tuition scholarship - Drexel University
Scholarship Award of $15,000 per year, renewable annually - Florida Institute of Technology
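Both ideas, compression through shared prefixes and the nearest-neighbor heuristic, fall out of a simple trie over traces. A Python sketch (the call-event traces are invented examples):

    class TrieNode:
        def __init__(self):
            self.children = {}
            self.count = 0      # traces passing through this node

    def insert(root, trace):
        """Insert one execution trace (sequence of events) into the prefix tree;
        shared prefixes are stored once, which is the source of the compression."""
        node = root
        for event in trace:
            node = node.children.setdefault(event, TrieNode())
            node.count += 1

    def shared_prefix_len(root, trace):
        """Heuristic: traces with longer shared prefix paths are treated as
        likely nearest neighbors."""
        node, depth = root, 0
        for event in trace:
            if event not in node.children:
                break
            node = node.children[event]
            depth += 1
        return depth

    root = TrieNode()
    for t in [["init", "dial", "connect", "hangup"],
              ["init", "dial", "busy"],
              ["init", "dial", "connect", "transfer"]]:
        insert(root, t)
    print(shared_prefix_len(root, ["init", "dial", "connect", "drop"]))   # 3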
2011 - CS302
THE APPLICATION OF C++ AS A METHOD FOR THE SYNTHESIS OF A SUPERIOR ENCRYPTION PROGRAM
William Marcus Bauer, Christian LeRoy Wood
Cloquet Senior High School, Cloquet, MN
C++, an object-oriented programming language based on the C programming language, is credited with allowing low-level access to the hardware while also including high-level features (HiTMilL, 2009). “Cryptography is the science of writing in secret code and…the first documented use of cryptography in writing
dates back to circa 1900 B.C” (Kessler, 1998, ¶5). With the advent of digital communications, cryptography has flourished due to the need for encryption of
transmissions over any untrusted medium (Kessler, 1998). The goal in encrypting data is to change a message, the plaintext, into “an intermediate form,” called
ciphertext. In ciphertext, the original data is present, but hidden, allowing for the release of the data without revealing the message it represents (Ritter, 2006).
The goals of this project were to create an encryption program that would be more effective in ensuring the security of encrypted information and to use C++
due to its efficiency and adaptability across multiple platforms. A program was written in C++ with the purpose of editing file binary data with a method similar to
an often used commercial encryption program. Using a key file in conjunction with a plaintext file, a time, and a time key, the program performs a set of
encryptions employing an XOR (exclusive or) function similar to the commercial program. In decrypting a file, the program uses the time from encryption in
conjunction with the original key to decrypt the file with the XOR function in reverse of the encryption method. Both research goals were successful.
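The XOR scheme with a key file plus a time key can be illustrated as follows (Python sketch; the exact way the program combines the key, time, and time key is not specified, so the mixing step here is an assumption):

    import itertools

    def xor_bytes(data, key):
        """XOR every data byte with the repeating key stream (self-inverse)."""
        return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

    def encrypt(plaintext, key, time_key):
        # Mix the encryption time into the key so identical messages differ.
        mixed = xor_bytes(key, time_key)
        return xor_bytes(plaintext, mixed)

    decrypt = encrypt    # XOR encryption is its own inverse given the same inputs

    key = b"example-key-file-bytes"
    time_key = b"20110512-1430"
    ct = encrypt(b"attack at dawn", key, time_key)
    print(decrypt(ct, key, time_key))   # b'attack at dawn'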
2011 - CS007
ANALYZING THE FUEL CONSUMPTION AND ENVIRONMENTAL EFFECTIVENESS OF ADDING LANES TO ROADS AND HIGHWAYS: EXAMINATION
VIA COMPUTER SIMULATION
Gregory Shika Bekher
Chesapeake Science Point Public Charter School, Hanover, MD
Increasing reliance on fossil fuels from other countries increases the deficit, spoils the environment, and degrades our national security. How do we reduce our
dependency on foreign oil? Reduce traffic. The question is, does adding another lane to a highway save fuel by driving at efficient speeds? The hypothesis is
that there will be significant improvement, although cost issues will occur beyond a certain number of lanes. The most reasonable approach to testing a hypothesis such
as this one is by computer simulation. The algorithm developed and implemented in C calculates the future speed, and a possible lane switch for each car
individually based on the distance from one car to the next; it can dynamically adapt to any number of cars, lanes, speed limit, distance between exits, etc. This
algorithm allows for many practical uses, including traffic pattern prediction. The program outputs the average speed, total distance traveled, and cars on the
road after the end of the fifteen-minute simulation period. Only the number of lanes was changed, for a controlled experiment, testing one, two,
three, four, and five lane roads with six repeated trials for each number of lanes, all on a twenty-mile virtual highway. Switching from one to two lanes yields
substantial fuel and cost improvement of up to fifteen thousand liters saved per hour. Switching from two to three lanes yields some improvement, and adding
any more lanes will yield marginal improvement while not being as effective. In conclusion, traffic on overused commuter highways with two or fewer lanes
causes increasing fuel consumption problems and should be widened.
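The per-car update described above, choosing a future speed and a possible lane switch from the gap to the next car, might look like this (Python sketch; the thresholds and headway rule are assumed, as the abstract gives no constants):

    def update_speed(speed, gap, speed_limit, accel=2.0, safe_time=1.5):
        """Each tick, a car speeds up toward the limit if its gap allows,
        and brakes to keep at least `safe_time` seconds of headway."""
        safe_speed = gap / safe_time
        if speed < min(speed_limit, safe_speed):
            return min(speed + accel, speed_limit, safe_speed)
        return min(speed, safe_speed)

    def want_lane_switch(gap_ahead, gap_other_lane):
        """Switch lanes when the neighboring lane offers meaningfully more room."""
        return gap_other_lane > gap_ahead * 1.5

    print(update_speed(speed=20.0, gap=60.0, speed_limit=30.0))   # 22.0: room to accelerate
    print(update_speed(speed=20.0, gap=15.0, speed_limit=30.0))   # 10.0: brake to keep headway
    print(want_lane_switch(15.0, 40.0))                           # True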
2011 - CS051
AUTOMATED EVOLUTIONARY MODELING WITH STATE-CONDITIONED L-SYSTEMS: USING FITNESS FUNCTIONS TO PRODUCE PLANT MODELING
L-SYSTEMS
Joseph Don Berleant
Little Rock Central High School, Little Rock, AR
The project's objective was to evaluate the effectiveness of using an automated fitness function to quickly and objectively judge how well a state-conditioned L-system modeled a given plant's structure and produce a state-conditioned L-system that modeled that plant. It was hypothesized that such an L-system could be produced through repeated mutations and selections based on the fitness function. Additionally, because the state-conditioned L-system is more general and flexible than standard L-systems, it was hoped that this new type of L-system would be more capable of such plant-modeling.<br><br> To test the functionality of the fitness function idea in this context, a Java program was made that began with a simple state-conditioned L-system and repeatedly created a set of mutations, from which the fitness function was used to determine which one best resembled a corn plant. The best mutation was subsequently used to create more mutations, and this process was repeated for 1000 generations.<br><br>In the end, the process was able to produce a state-conditioned L-system which
resembled the corn plant more closely than the initial state-conditioned L-system did. However, the ability of the produced L-system in modeling the corn plant
was limited – branching appeared in the same general locations in both the desired plant and the model, but much of the subtleties of the desired plant could
not be reproduced in the produced model with this method. Different fitness functions, though, could yield drastically different results, and this method still has
promise for aiding future plant structure research.
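The mutate-and-select loop the abstract describes might look like the following skeleton; the L-system representation and the fitness function are placeholders, not the project's:

    import random

    def mutate(rules: dict) -> dict:
        # Naive rule tweak: append a random symbol to one production.
        new = dict(rules)
        sym = random.choice(list(new))
        new[sym] = new[sym] + random.choice("F+-[]")
        return new

    def fitness(rules: dict) -> float:
        # Placeholder objective; the real one compared rendered structure
        # against the target plant.
        return -abs(len(rules["F"]) - 12)

    best = {"F": "F[+F]F[-F]F"}
    for _ in range(1000):                   # 1000 generations, as in the abstract
        candidates = [mutate(best) for _ in range(20)]
        best = max(candidates + [best], key=fitness)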
2011 - CS021
AILAB - SCRIPTING LANGUAGE FOR ARTIFICIAL INTELLIGENCE
Ionut Alexandru Budisteanu
High School "Liceul National Mircea cel Batran", Ramnicu Valcea, Valcea, ROMANIA
AILAB is a scripting language that allows the fast creation of applications containing Artificial Intelligence. The software supports many AI paradigms:

- Neural networks (MLP, Kohonen, Carpenter, SOM, Hopfield)
- Fuzzy techniques
- Genetic algorithms
- Logical programming

The software can export your work as a DLL, which can then be imported into other programming languages such as C and C#. AILab can be used in a wide range of applications, including image processing, image classification, eye tracking, robot control, pattern recognition, forecasting (weather, financial, energy), signal processing, expert systems, and any application of connectionist models. AILab lets you type a single command that would take pages of C or Java code to reproduce; the software is oriented toward the easy creation of applications containing Artificial Intelligence.

To demonstrate the capacity of my programming language I created many examples: speech recognition, face recognition, a Vocal User Interface (which speaks Romanian), thief recognition, recognition of a spectral fingerprint from a flex, a human-computer interface for the blind, a web browser that reconfigures automatically based on face classification of the user, optical character recognition, automatic form processing, face classification and clustering, and minimal examples with genetic algorithms and logical programming.

The language allows easy code tracking; it contains a syntactic tree, watch windows, step-by-step execution, breakpoints, run-to-cursor debugging, IntelliSense, help, and many other facilities. The resulting code can be used from any programming language. The application is designed for Windows platforms, and the software can easily be used in academia to study Artificial Intelligence.
Awards won at the 2011 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
2011 - CS034
ON THE CREATION OF MUSIC THROUGH THE APPLICATION OF GENETIC PROGRAMMING TO FRACTAL SYSTEMS
Scott Borden Cesar
Scott Borden Cesar, Santa Clara, CA
My project was on the creation of music with genetic programming and fractal systems, i.e. teaching a computer to compose music for me. Many people have tried creating computer composition systems before, but my approach was relatively fresh in that it married two techniques that had each seen some level of success, in hopes of finding a working approach.

The goal of the project was to create a system by which a person without composing talent could create music, which has applications in fields such as game design, where a person might need music for their game but probably won't have the composing talent to create it.

I created a system that produced pieces of fractal music, none of which sounded very good. The next step was to write a genetic system that would assemble pieces of fractal music into a larger, more coherent, and hopefully better-sounding piece, the idea being that this would allow the computer to learn to compose pieces based on user input.

Because time constraints didn't allow me to create any automatic evaluation tools, and without them one couldn't iterate the system often enough to search any appreciable amount of the space, the whole system became hard to use. As a result, I couldn't produce pleasing results, making the project, while not an outright failure, not a great success either.
2011 - CS045
HIGH SCHOOL FACEBOOK NETWORK ANALYSIS
Angela Chen
Nicolet High School, Glendale, WI
Network science was recently identified as an insight of the past decade. Network analysis, which emphasizes interconnectedness, is a powerful tool to study
systems of all kinds (social, biological, economic, etc.), including social networking sites. Facebook, the world's largest online social network, has been the subject of multiple social network analysis studies.

This study measures and analyzes a Facebook friend network of 117 freshmen from a high school,
with 3432 links between them. Students were invited to participate through accepting friendship invitations from the project account on Facebook. At different
time points, the network data was collected using the Facebook application netvizz, then analyzed with Gephi, a network analysis software. A single highly
connected community was identified. Results indicate a well-connected core of high-degree nodes surrounded by a periphery of lower-degree nodes. As found
in other studies, the cumulative degree distribution conforms to a Gaussian distribution, suggesting that the network is single-scale. Those with fewer friends have a greater tendency to cluster, as was previously found for the entire Facebook network. The study also covers the network's evolution over time, relationships with student performance, factors such as profile history, and gender differences.

This is the first study of online social networks of high school
students that the author is aware of. Friendship networks, especially during adolescence, have been shown to influence academic performance and behavior.
This study exemplifies the potential of network science in providing insight into the relationship between social network structure and behavior for high school
students.
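For readers who want to reproduce this style of analysis, here is a sketch of the degree and clustering measurements on a toy random graph (the study itself collected data with netvizz and analyzed it in Gephi, not with this code):

    import networkx as nx
    from statistics import mean

    # Stand-in for the 117-node friend network.
    G = nx.gnp_random_graph(117, 0.5, seed=1)
    clustering = nx.clustering(G)                 # per-node clustering coefficient
    median_deg = sorted(d for _, d in G.degree())[len(G) // 2]

    # Compare clustering of low-degree and high-degree nodes, echoing the
    # finding that those with fewer friends cluster more.
    low = [clustering[n] for n in G if G.degree(n) < median_deg]
    high = [clustering[n] for n in G if G.degree(n) >= median_deg]
    print(mean(low), mean(high))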
2011 - CS038
DEVELOPING AN ADAPTIVE DISASTER EVACUATION SIMULATION
Francis Xinghang Chen
Penn High School, Mishawaka, IN
This work presents an approach to developing a proof-of-concept Dynamic Adaptive Disaster Simulation (DADS) in the Netlogo 4.1.1 programming language
and modeling environment. DADS is a system capable of predicting population movements in large-scale disasters by analyzing real-time cell phone data. It
has been difficult for existing computer models to accomplish such tasks--they are often too inflexible to make realistic forecasts in complex scenarios. This has
led to reactive, uninformed emergency response tactics with disastrous consequences. DADS resolves these issues by continuously updating simulations with
real-time data. It accomplishes this by tracing movements of cell phone users on a Geographic Information System (GIS) space, then using geospatial
simulation algorithms to infer regional preferences. Inferences are incorporated into agent-based simulations which model future population movements
through fluid dynamics principles. Due to privacy concerns, this research utilized synthetic data that were generated to mimic the cell phone location data
associated with a recent disaster. Validation techniques such as Manhattan distance show that the simulation is both internally and predictively valid. DADS
can adaptively generate accurate movement predictions in a variety of disaster situations, demonstrating a modeling paradigm that is highly applicable to
population modeling and to other disciplines of computer simulation.
Awards won at the 2011 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
Tuition Scholarship Award in the amount of $8,000 - Office of Naval Research on behalf of the United States Navy and Marine Corps
2011 - CS017
MAINTAINING VIEWING QUALITY WITH LOWER NUMBER OF LEDS
Yu-Jung Chen
The Affiliated Senior High School of National Kaohsiung Normal University, Kaohsiung City, CHINESE TAIPEI
The purpose of this research is to reduce the number of light-emitting diodes (LEDs) needed on display panels while maintaining satisfactory viewing quality. In general, viewing distance, LED size and LED panel packing density all contribute to the perceived viewing quality. An LED display is said to have good perceived viewing quality if the individual LEDs are indistinguishable while the display as a whole is not overly bright.

The Rayleigh criterion defines the angular resolution of an optical system, including the human eye. In short, it determines when two neighboring LEDs become indistinguishable when viewed at a distance. Therefore, the relationship between viewing distances, LED placements and LED panels can be experimentally determined.

Experiments were carried out to test LED display viewing quality at various distances using different LED sizes and panels. It was determined that having more LEDs does not necessarily lead to better viewing quality. Based on the experimental results and the Rayleigh criterion, a computational model was derived and implemented. Given the parameters for a given LED panel setup, the achievable number of saved LEDs is computed; furthermore, the actual LEDs to be turned on and off to achieve the saving are also determined.
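A minimal sketch of the Rayleigh-criterion calculation underlying such a model, with assumed values for wavelength and pupil diameter:

    import math

    # Rayleigh criterion: two point sources closer than theta_min radians
    # apart cannot be resolved. Values below are illustrative assumptions.
    wavelength = 550e-9    # green light, metres
    pupil_diameter = 3e-3  # typical daytime pupil, metres
    theta_min = 1.22 * wavelength / pupil_diameter   # ~2.2e-4 rad

    def max_led_spacing(viewing_distance_m: float) -> float:
        """LEDs closer together than this appear merged at the given distance."""
        return viewing_distance_m * theta_min

    print(max_led_spacing(10.0))   # roughly 2.2 mm at 10 m

Any LEDs packed more densely than this spacing are indistinguishable to the viewer, which is why some of them can be switched off without harming perceived quality.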
Awards won at the 2011 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
2011 - CS009
COMPOSING FRUSTA TO FOLD POLYHEDRAL ORIGAMI
Herng Yi Cheng
NUS High School of Mathematics and Science, Singapore, Singapore, SINGAPORE
This work proposes a novel algorithm to draw crease patterns that fold into extruded polyhedra composed of vertically stacked frusta. The crease
patterns of the individual frusta were drawn using the special case of a method the author has previously published to fold biplanars, or the convex hulls of
nonintersecting planar polygons. The algorithm derived in this project uses a new strategy to compose the crease patterns of the individual frusta to obtain the
crease pattern of the whole polyhedron. A computer program has been written to implement the algorithm automatically, allowing users to specify a target polyhedron and generate its crease pattern.

Applications of the algorithm include folding a quadrangulation of solids of revolution so as to approximate
their outer surface. Any polyhedron can be decomposed into a chain of tetrahedra to be sequentially extruded and thus folded. Programmable matter can also
be folded into extruded shapes, producing tools that mechanically shapeshift to serve different purposes under different demands. Finally, composing frusta to
fold polyhedral origami has many potential benefits for the fabrication of nanostructures. This possibility is brought closer by recent success in simulated results
of water nanodroplets that activate and guide folding of graphene nanostructures.
Awards won at the 2011 ISEF
Second Award of $500 - Association for Computing Machinery
First Award of $3,000 - Computer Science - Presented by Intel
Second Award of $150 - Patent and Trademark Office Society
2011 - CS055
ACTIVE NOISE CANCELLATION IN HUMAN-ROBOT SPEECH INTERACTION
Jao-ke Chin-Lee
Stuyvesant High School, New York, NY
One of the principal modes of human interaction is speech: we recognize, filter, and process speech almost without thinking about it. It follows that speech is
also very natural for humans to use in human-robot interaction—as it transpires, in many applications, it is also more practical than the current modes of
computer input and output (e.g. keyboard interface). Filtering out undesirable noises is critical for proper comprehension of commands, whether by robots or
humans. Prior research has focused on notch filters and spectral subtraction, which are limited in frequency focus and which filter out speech that we want to keep.

This project, using a new combination of processes, will result in significantly improved noise filtering, which is critical for speech recognition. Beeps will be identified based on frequency and amplitude data extracted with a Fourier transform of the input wave, and amplitude and phase
estimates will be obtained and compounded to form estimated beep waves that will be subtracted from the initial waveform to reduce the power of beeps.
Enhanced, accurate sound recognition and interpretation that can screen out unwanted noise signals will allow a multitude of machines and robots, including
cars, medical equipment, and manufacturing equipment, to be greatly improved to enhance safety and productivity. The application of this research is for natural
language processing and speech recognizing machines, the autonomous forklift in particular, but future applications may include noise filtering for humans with
impaired signal recognition, e.g. humans with ADHD noise sensitivity.
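A bare-bones sketch of the proposed beep estimation and subtraction on a synthetic signal (real speech handling would need windowing and peak tracking; the 500 Hz cutoff is an assumption):

    import numpy as np

    fs = 16000
    t = np.arange(fs) / fs
    # Stand-ins: low-frequency "speech" plus a 1 kHz beep.
    signal = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    mag = np.abs(spectrum).copy()
    mag[freqs <= 500] = 0                    # ignore the speech band (assumption)
    k = int(np.argmax(mag))                  # strongest remaining tone
    amp = 2 * np.abs(spectrum[k]) / len(signal)
    phase = np.angle(spectrum[k])
    # Rebuild the estimated beep wave and subtract it from the input.
    beep_est = amp * np.cos(2 * np.pi * freqs[k] * t + phase)
    cleaned = signal - beep_est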
Awards won at the 2011 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
First Award of $1,500 - Synaptics, Inc.
2011 - CS056
SEOR: SIMULATED ENVIRONMENT FOR OBJECT RECONSTRUCTION
Elliott Suk Chung
Gwinnett School of Mathematics, Science, and Technology, Lawrenceville, GA
3D object reconstruction (scanning) has been studied extensively in the research community. However, there is no open, standard framework for studying the algorithms that make this technology possible. In this research, a simulated framework called SEOR (Simulated Environment for Object Reconstruction)
capable of animating object scenes for 3D scanning is developed. Using SEOR, photo-realistic images can be generated specifically for a scanning
methodology based on the user-defined parameters without time-consuming camera and projector calibrations. This standard approach allows researchers to
experiment with the same set of camera and object properties. To facilitate experimentation with different parameters in the 3D scanning process, a GUI is
developed for rapid animation and object scanning. SEOR integrates an open source 3D graphics engine that animates photo-realistic images with accurate
ray-tracing capability for modeling shadows and textures. Two popular scanning approaches, sweep plane and structured lighting, are studied. The experimental results allow one to compare the simple but time-consuming sweep-plane approach with the more complex but faster structured-lighting approach. The results demonstrate the potential of using this framework for studying emerging scanning algorithms.
Awards won at the 2011 ISEF
Agilent offers paid summer internships at an Agilent site that aligns with the student and his/her background. - Agilent Technologies
Third Award of $1,000 - Computer Science - Presented by Intel
2011 - CS048
NOTE TO SELF: A TRANSCRIPTIONAL STUDY OF AUDIO FILES USING FOURIER TRANSFORMATION AND NEW APPLICATIONS
Ryan Kyong-Doc Chung
Terre Haute South Vigo High School, Terre Haute, IN
An audio file recorded by singing into a cell-phone microphone can be transcribed into musical notes using audio-signal-conversion algorithms such as the Fourier transform.

I wrote a program that converts a wave file from the time domain into the frequency domain using a Fourier-transform estimation algorithm, Welch's power spectral density estimation. The program then finds the matching frequency peaks and converts the transformed files into musical notes. The musical notes are played and compared with the actual audio-recorded files.

The results indicate that sound recording quality is important. Higher pitches and slower music, such as whistling slowly into the microphone, convert to musical notes more accurately. Since their frequency peaks are sharper and narrower, musical notes from instruments are determined more precisely than those from a recording of a voice. An overtone-monitoring algorithm and audio frequency filters are necessary to remove interference from overtone peaks and noise. Tone shifting requires a larger number of time steps in order to estimate the changes.

A quiz mode was developed to test one's musical and audio memory. The composition mode is used to identify similarities between musical scores and to estimate memory deterioration in Alzheimer's patients.

Future uses of this technology include diagnostic tools and aids for tone-deafness and speech therapy, a language-learning tool for intonation patterns, and speech-pattern diagnostics and aids for autism and the hearing impaired.
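A minimal sketch of the pitch-to-note step using Welch's method (via SciPy); the segment length and note mapping are illustrative choices:

    import numpy as np
    from scipy.signal import welch

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def dominant_note(samples, fs):
        # Estimate the power spectral density and take its strongest peak.
        freqs, psd = welch(samples, fs, nperseg=4096)
        f0 = freqs[np.argmax(psd)]
        # Map frequency to the nearest equal-temperament MIDI note.
        midi = int(round(69 + 12 * np.log2(f0 / 440.0)))
        return NOTE_NAMES[midi % 12], midi // 12 - 1    # e.g. ("A", 4)

    fs = 44100
    t = np.arange(fs) / fs
    print(dominant_note(np.sin(2 * np.pi * 440 * t), fs))   # ("A", 4)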
Awards won at the 2011 ISEF
Certificate of Honorable Mention - Acoustical Society of America
2011 - CS015
COMPUTER MODELING IV: A PARTICULATE DISPERSION MODEL EMPLOYING REAL-TIME WIND CALCULATIONS
Jessica Marie Constant
Poudre High School, Fort Collins, CO
Computer modeling is used to predict complex systems such as environmental or atmospheric conditions. Over the past four years, I have been developing an atmospheric computer model.

The goal of this year's project phase is to create a program capable of accurately modeling the dispersion of particulates in the atmosphere. To that end, I introduced real-time wind calculations, vertical advection and vertical diffusion to the model. These additions allow the model to behave realistically as wind patterns shift. My model is now able to show an accurate representation of pollution dispersion.

Horizontal and vertical advection, diffusion, and temperature are influencing factors implemented via functions, allowing the effects of each to be seen either on their own or in conjunction with each other. I modified my model to employ an "A-grid," which maintains the horizontal wind, pollution and temperature variables at the same location on the grid. I also incorporated concepts and ideas from both my Lagrangian and Eulerian models.

The data produced by the model were analyzed in 2D and 3D plots made with Gnuplot and MATLAB. The graphs were used to visualize the results and to isolate programming errors during development.

The amount of data necessary to analyze pollution dispersion requires a computer model and a graphical depiction of the model's output. My model can be used to predict and view the dispersion of pollution particulates released into the atmosphere. The addition of real-time wind calculations has yielded a true atmospheric model.
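A one-dimensional sketch of the advection-diffusion update at the heart of such a model; the grid, wind speed and diffusivity below are illustrative, not the project's values:

    import numpy as np

    nx, dx, dt = 100, 100.0, 1.0        # cells, metres, seconds
    u, kdiff = 5.0, 10.0                # wind speed (m/s), diffusivity (m^2/s)
    c = np.zeros(nx)
    c[nx // 2] = 1.0                    # initial pollutant puff

    for _ in range(600):
        # Upwind finite difference for advection (valid for u > 0),
        # centered second difference for diffusion; periodic boundaries.
        adv = -u * (c - np.roll(c, 1)) / dx
        dif = kdiff * (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
        c = c + dt * (adv + dif)

The chosen steps satisfy the usual stability limits (u*dt/dx and kdiff*dt/dx^2 both well below 1), which is the kind of constraint a real dispersion model must also respect.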
Awards won at the 2011 ISEF
Certificate of Honorable Mention - American Meteorological Society
2011 - CS002
CREATION AND NAVIGATION OF A 3D ENVIRONMENT WITH STEREO VISION, A CONTINUATION
Dylan Cooper Dalrymple
Pensacola High School, Pensacola, FL
3D data can be used in robotics to navigate, reconstruct for interpretation, and to manipulate an environment. Though this data is useful, obtaining it accurately
in hobby robotics can be difficult and expensive. Stereo vision, if implemented correctly, can solve these problems. The purpose of my project was to create a
stereo sensor, a robot to accommodate it, and the algorithms to convert the image data to a 3D map and to interpret that data to navigate the environment.

The stereo vision sensor I created consisted of two color webcams. I built the robot using a computer motherboard and its necessary components, an Arduino microcontroller with a motor controller, and a four-motor system.

For stereo image processing, I used the Sum of Absolute Differences (SAD) algorithm to construct a disparity map and the Stereo Distance algorithm (based on previous research) to convert the disparity map to an actual distance map. The SAD algorithm scans a window around each pixel to find the best-matching window in the other image and sets that offset as the disparity for that pixel. The Stereo Distance algorithm converts a single disparity value to a distance value using trigonometry, specifically the Law of Sines.

The results from these algorithms were accurate and processed quickly on test stereo pairs taken by calibrated cameras, but because my stereo sensor was not calibrated properly, the results from the robot were unsuccessful. With additional experimentation, I can correct inaccuracies and improve the robot's capabilities.
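A sketch of SAD block matching for a single pixel; the window size and search range are illustrative assumptions:

    import numpy as np

    def disparity_at(left, right, y, x, win=4, max_d=32):
        """Return the disparity whose right-image window best matches."""
        patch = left[y - win:y + win + 1, x - win:x + win + 1].astype(int)
        best_d, best_sad = 0, float("inf")
        for d in range(max_d):
            if x - d - win < 0:
                break
            cand = right[y - win:y + win + 1,
                         x - d - win:x - d + win + 1].astype(int)
            sad = np.abs(patch - cand).sum()   # sum of absolute differences
            if sad < best_sad:
                best_sad, best_d = sad, d
        # Depth then follows from triangulation on the camera baseline.
        return best_d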
Awards won at the 2011 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
First Award of $2,500 - SPIE-The International Society for Optical Engineering
2011 - CS308
MULTITEST
Nayely Jara Diaz de Leon, Tania Janeth Rodriguez Diaz de Leon,
Centro de Bachillerato Tecnologico Industrial y de Servicios NO. 168, Aguascalientes, MEXICO
A survey performed in the city of Aguascalientes, Mexico, indicates that high school teachers manually review the tests given (on paper) to their students. This process is slow, error-prone, laborious, and tedious, which raises the question: is it possible to create a software system to review these tests in less time and with less chance of human error?

To answer this question the project "Multitest" was created: a computer system that allows the review of multiple-choice tests through the following processes: 1) a picture of the answer sheet is captured with a web camera; 2) the digital image is analyzed to identify the student's responses; 3) the test is reviewed by comparing the student's answers with the correct answers; 4) results are provided in print, on screen, in the database, and/or as audio.

Based on the testing performed, the results indicate that: 1) review time decreased by more than 90%, and 2) the margin of error was reduced to less than 1%.
2011 - CS054
VAAC (VIRTUAL ACTIVATION OF APPLICATION FOR COMPUTER)
Erendira Citlalli Jara Diaz de Leon
Centro de Bachillerato Tecnologico Industrial y de Servicios NO. 168, Aguascalientes, MEXICO
The project VAAC (Virtual Activation of Application for Computer) emerged with the goal of developing a virtual environment that, without the use of a mouse or keyboard, allows navigation among the n slides that make up a VAAC presentation.

VAAC is a software system focused on the design and activation of presentations made up of n slides of information, where each slide (m) can be composed of the following objects: titles, text, images, multimedia applications, and documents in DOC, PDF and/or XLS formats. It has the advantage of navigating between slides by receiving virtual instructions through hand gestures over the slide, indicating a direction button (Next/Previous) and substituting for the mouse and keyboard. The gesture instructions considered are: 1) navigate to the next slide (m+1); 2) navigate to the previous slide (m-1).

The processes considered for the implementation of the project were: 1) preliminary investigation; 2) determination of hardware and software requirements; 3) system design; 4) software development; 5) system testing; and 6) implementation and evaluation.
2011 - CS053
PROBABILISTIC INFERENCE OF PATHWAYS OF INFECTION
Anqi Dong
Walter Murray Collegiate Institute, Saskatoon, Saskatchewan, CANADA
Inferring pathways of infection from epidemiological data on infection spread could greatly benefit healthcare systems by informing decision-making towards
infection control.

A general Bayesian approach was developed to systematically infer information about the transmission of infection between persons. In this algorithm, the pathways of infection (spanning trees) of an arbitrary contact network were enumerated, and the probability of each such pathway of infection was
calculated. Using the probability and properties of each possible pathway of infection, measures that could be useful for healthcare systems were calculated.
Such data include highly probable potential pathways of infection and probable transmission pairs. In addition to developing a general inference approach,
surprisingly simple results for some special sets of data were found.

To evaluate the potential of this approach, an algorithm that used these inference
techniques was implemented. To evaluate the quality of inference, a simulation model of infectious disease was constructed and used to conduct a series of
experiments. For each experiment, the simulation produced a synthetic contact graph, infection pathways for that graph, and patients’ clinical histories. The
contact network and some clinical history data were then used by the inference algorithm. For cross-validation, inferred infection pathways and other results
were compared to “ground truth” data produced by the simulation but not used by the inference algorithm.

The inference algorithm was found to be
accurate for many classes of data. Inference shows promise for serving as a powerful, versatile tool for assessing infectious disease spread.
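A brute-force sketch of the spanning-tree (pathway) enumeration step on a small contact network; a real implementation would then weight each tree by the clinical data, and this exhaustive approach is practical only for small graphs:

    from itertools import combinations

    def spanning_trees(nodes, edges):
        # A spanning tree is any acyclic subset of n-1 edges; test each
        # candidate subset for cycles with a tiny union-find structure.
        for subset in combinations(edges, len(nodes) - 1):
            parent = {v: v for v in nodes}
            def find(v):
                while parent[v] != v:
                    v = parent[v]
                return v
            acyclic = True
            for a, b in subset:
                ra, rb = find(a), find(b)
                if ra == rb:
                    acyclic = False
                    break
                parent[ra] = rb
            if acyclic:
                yield subset

    trees = list(spanning_trees("ABCD",
                                [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]))
    print(len(trees))   # a 4-cycle has 4 spanning trees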
2011 - CS023
EFFICACY OF CROWDSOURCING IN GENERATING ACCURATE SEMANTIC METADATA
Tucker Sterling Elliott
Suncoast Community High School, Riviera Beach, FL
Crowdsourcing, or using the computational power of a group of humans to complete a task, appears to be effective in fields such as image recognition and text digitization. The goal of this research is to determine the effectiveness of crowdsourcing in generating classification tags for a variety of words in different
categories. For instance, “Pepsi” is tagged as a beverage, while “Avatar” is tagged as a movie. To obtain a crowd willing to help generate tags, an online game
was created in which players are asked to type in as many words of a particular category as they can think of that begin with a particular letter. Data from all
players was collected and a variety of data analysis methods were used to confirm the validity of each entry. If more than a certain percentage of players enter
a particular phrase, that phrase is considered valid. An alternative analysis strategy keeps track of player performance, giving better players a higher weight.
This revised analysis method, as well as a typo-detection system utilizing Levenshtein distance, proved effective in increasing the number of entries deemed
valid while reducing false positives. The system proved capable of generating complete lists of correctly tagged data belonging to small categories such as US
States, and capable of generating fairly comprehensive lists of other larger categories such as movies.
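A sketch of the Levenshtein distance named above, with an assumed edit-distance threshold for flagging probable typos:

    # Classic dynamic-programming edit distance; the tolerance used for
    # typo detection below is an assumption, not the project's value.
    def levenshtein(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def is_probable_typo(entry: str, known: str) -> bool:
        return 0 < levenshtein(entry, known) <= 2           # assumed threshold

    print(is_probable_typo("Pepis", "Pepsi"))   # True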
2011 - CS013
ANKA - A NEW KIND OF ENCRYPTION ALGORITHM
Mehmet Emre
Ozel Sunguroglu Fen Lisesi, Sahinbey, Gaziantep, TURKEY
This project aims to create a safer and better symmetric encryption algorithm. The initial idea was to take the good parts of the AES finalist algorithms and combine them, but they were too different from one another, so I decided to create a new encryption algorithm based on different mathematics.

I analyzed two of the AES finalists: Rijndael (the AES winner) and Serpent. I studied the mathematics behind them, which is mostly algebra, and I found a mathematical foundation that had never been used in modern cryptography: magic squares. I searched previous work on the generation of magic squares and found Asker-Ali Abiyev's algorithm for generating balanced squares. Balanced squares provide larger, variable block sizes that depend on the key. I built an algorithm using balanced squares for substitution, a bitwise exclusive-or operation, and key scheduling. I then profiled my algorithm, changed the slower parts, analyzed it, and rebuilt it twice.

After creating Anka I built a benchmark including unoptimized versions of Anka, Rijndael, Serpent and Twofish. Anka was the slowest in general (5% slower than Twofish), but it is still fast enough; the speed loss comes mostly from the creation of the balanced squares. Anka's memory usage is also larger than that of the other algorithms because of the bigger block size.

In all, Anka provides better, more flexible and safer symmetric encryption. It can run on PCs, modern smartphones, servers, etc., and it is usable for personal and commercial security.
2011 - CS044
A NOVEL FRAMEWORK FOR QUASI-DYNAMIC TASK SCHEDULING ON PARALLEL COMPUTERS
Jonathan Abraham Goldman
Plainview-Old Bethpage John F. Kennedy High School, Plainview, NY
A unique solution to the scheduling problem on massively parallel computers is given. The problem, NP-Complete in general, involves dispatching subtasks of
a parallel job onto processors such that the total execution time is minimized while meeting all data dependencies. Most existing work relies on
oversimplification of the problem, such as the assumption of a completely-connected network. My work presents a framework designed to build upon these
heuristics while relaxing such constraints. The “quasi-dynamic” model, in which both the parallel job and the network architecture are represented by arbitrary
graphs, is adopted. The crux of my framework, called Colony Scheduling, is the classic idea of divide-and-conquer: both the parallel job and the network
architecture are divided into dense regions. Dense regions of the parallel job are assigned to dense regions of the network architecture. This framework is
implemented around two existing algorithms, 405 random task graphs, and 5 network topologies. Colony Scheduling is shown to improve solution quality by up
to 80%. In addition, scalability is improved. An in-depth analysis of the results is presented, and trends based on graph size and density are illuminated.
Awards won at the 2011 ISEF
Each winning project will receive $2,000 in shares of UTC common stock. - United Technologies Corporation
2011 - CS019
INTELLIGENT POWER MANAGEMENT OF COMPUTER PERIPHERALS USING DOMOTIC TECHNOLOGIES
Ankush M Gupta
duPont Manual Magnet High School, Louisville, KY
As technology has developed, computers, printers, modems, routers and other computer peripherals have become increasingly prolific in people's lives.
Increasing use of technology has led to an increase in both energy consumption and the carbon footprint, much of which can be prevented by using efficient
techniques for managing the power states of these devices. Domotics, the science of using electronic techniques to manage household devices, has sufficiently
evolved that it can be programmatically implemented in an efficient and inexpensive manner.

The purpose of this application is to utilize domotic
technology to manage the power states of computers and computer peripherals using PowerLine Carrier signals both efficiently and intelligently. The
application consists of several components to facilitate extensibility, including a main interface, an application programming interface for a plugin system, and a
web-service plugin. The plugin system was also used to create a Viterbi-based algorithm that optimizes the energy savings of printer usage based on prior
usage data. The algorithm was shown to have power savings of 20% over the traditional method in a simulation using printer logs from a period of one year. In
addition, the same algorithm can be extended to devices other than printers, resulting in even more energy savings.

The application
was successfully created and exceeded the design specifications. It is undergoing testing to further improve the algorithm used for printer management and
also improve performance in high-interference environments.
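A toy Viterbi decoder over ON/OFF printer states, in the spirit of the algorithm described; all probabilities are illustrative assumptions, not values learned from the printer logs:

    # Hidden states: whether the printer should be on; observations: activity.
    states = ("on", "off")
    start = {"on": 0.5, "off": 0.5}
    trans = {"on": {"on": 0.7, "off": 0.3}, "off": {"on": 0.2, "off": 0.8}}
    emit = {"on": {"job": 0.6, "idle": 0.4}, "off": {"job": 0.05, "idle": 0.95}}

    def viterbi(observations):
        """Most likely state sequence given observed printer activity."""
        path = {s: ([s], start[s] * emit[s][observations[0]]) for s in states}
        for obs in observations[1:]:
            path = {s: max(((p + [s], prob * trans[p[-1]][s] * emit[s][obs])
                            for p, prob in path.values()), key=lambda t: t[1])
                    for s in states}
        return max(path.values(), key=lambda t: t[1])[0]

    print(viterbi(["idle", "idle", "job", "idle"]))

The decoded sequence indicates when powering the printer down is unlikely to interrupt a job, which is the decision the energy-saving plugin has to make.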
2011 - CS020
VIFIT: AN OPEN SOURCE PROGRAM TO EVALUATE POTENTIALLY SEIZURE-INDUCING MEDIA
Alexander Eskil Harding
Cleveland High School, Portland, OR
Online, free, instant, and worldwide distribution of videos is dangerous for those with photosensitive epilepsy (PSE). Even with readily available standards and
programs to evaluate and test videos for flickering and flashing that can induce seizures, videos are often not screened or labeled. The Photosensitive Epilepsy
Analysis Tool (PEAT) is the only free one of the three tools available (until now) to scan for flickering defects. The license for PEAT allows only web-based noncommercial material to be tested, meaning that individuals with PSE must trust advertisers and other commercial producers to screen and label videos.

I have built an open source program and engine, the Video Flicker Investigation Tool (VIFIT). Written in C and executable on multiple operating systems, this tool accurately assesses luminance and red flash in content that could potentially cause seizures in people with PSE. Only one of the 71 videos tested through VIFIT conflicted with PEAT: PEAT passed the video and VIFIT failed it (by six frames of luminance flash).

Being open source, VIFIT can be used by commercial video distributors, by viewers, and in statistical scientific video analyses to screen content without licensing constraints. The program also has the potential to become broadly recognized and more stable, and to grow into the platform and underlying open source engine for PSE video accessibility tools of tomorrow, addressing my overall objective of increasing internet safety and
accessibility.
2011 - CS303
THE RESEARCH ON THE SPACE INTERACTIVE 3D MAPPING METHOD
Yizheng He, Haoyan KANG, Jiayi WANG
Northeast Yucai School, Shenyang, Liaoning, CHINA
Current three-dimensional mapping methods are mostly based on the processing of precise two-dimensional data, but not direct 3D operations. To implement a
more straightforward way of three-dimensional mapping, we drew inspiration from ancient Chinese craftsmen who managed to create marvelous 3D objects without any precise orientations.

To achieve direct 3D mapping in this research, we designed a brand-new computer-human interaction (CHI) tool, Space Interaction: users may press, stretch or rotate the hardware to sculpt 3D objects in the computer, thereby better reflecting their inspiration.

The CHI data describing user operations are generated by the displacement of several button analog points at different positions, and by the altitude and angle changes simulated by two potentiometers. Several microcontroller units collect these data and send them to the computer. A VC++ 6.0 program processes the CHI data and builds 3D objects through OpenGL. When connected to a computer, our device is recognized as a standard Human Interface Device and can thus respond to Windows API functions, which makes it compatible with other 3D mapping software.

We tested our mapping method among users. From the feedback, we found our mapping method capable of building basic 3D graphics. Compared with other mapping methods, Space Interaction demonstrated its advantages in directness and its ability to improve the working efficiency of 3D modelers.
Awards won at the 2011 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel
2011 - CS305
A GENETIC ALGORITHM APPROACH TO MINIMIZING BEAM LOSS IN HIGH POWER PARTICLE ACCELERATORS
Yajit Kumar Jain, Scotty Chung, Carlos del-Castillo-Negrete
Oak Ridge High School, Oak Ridge, TN
High power particle accelerators are becoming increasingly vital in many areas of scientific research including particle physics, materials science, medicine,
and alternative energy. However, the availability of accelerators is drastically reduced by the problem of beam loss, since losing even a small fraction of the
beam can damage the hardware and disrupt accelerator availability. Understanding and controlling beam loss using theoretical models is challenging since
beam loss in the accelerator is inherently stochastic, noisy, and prone to drifts. Currently, many facilities are limited to minimizing beam loss via manual tuning
of the accelerator. In this paper we propose a more efficient automated system. Our technique uses a genetic algorithm to minimize the beam loss function. At
the intersection of biology and computer science, genetic algorithms imitate evolutionary processes such as reproduction, mutation, and natural selection. For
practical implementation, we modified the standard genetic algorithm to optimize within the constraints of the accelerator. Additionally, we created a user
interface for control room operation. Our method was successful in reducing beam loss on both simulations and the real accelerator. This novel technique for
reducing beam loss will serve as a point of reference for current and future high power accelerators.
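A minimal genetic-algorithm minimizer in the spirit of the method described; the quadratic objective is a stand-in for the accelerator's measured beam loss, and the clamping illustrates optimizing within parameter constraints:

    import random

    def beam_loss(params):                 # hypothetical surrogate objective
        return sum((p - 0.3) ** 2 for p in params)

    def evolve(dim=4, pop_size=20, generations=50):
        pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=beam_loss)
            survivors = pop[: pop_size // 2]           # selection
            children = []
            for _ in range(pop_size - len(survivors)):
                a, b = random.sample(survivors, 2)
                child = [random.choice(g) for g in zip(a, b)]   # crossover
                i = random.randrange(dim)
                # Mutation, clamped to the allowed parameter range.
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.05)))
                children.append(child)
            pop = survivors + children
        return min(pop, key=beam_loss)

    print(evolve())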
Awards won at the 2011 ISEF
Team First Award of $500 for each team member - IEEE Computer Society
Fourth Award of $500 - Computer Science - Presented by Intel
2011 - CS011
ADDITIONAL POV DISPLAY FOR YOUR COMPUTER
Dmitry Alexandrovich Kabak
Polytechnical Gymnasium N6, Minsk, BELARUS
My project is dedicated to building an accessory device for a personal computer that displays additional information such as incoming messages, memos, and the time and date. I decided to use persistence-of-vision (POV) technology for the device because it greatly increases usability and visual appeal while decreasing the price of the whole device. My work consists of two parts: hardware and software.

The device consists of several homemade PCBs with electronic components, a spinning base that holds them, a motor with a propeller to spin them, and some ICs to connect the whole thing to the computer. The display has three boards with 16 LEDs on each, which create a glowing picture that appears to hover in the air. The display also has one board with an AVR processor, which runs my firmware and controls all the LEDs and the wireless communication. Another AVR works on the stationary side and transmits data to the display; it also connects to the computer's USB port. All components were made by me, and with the help of my schematics almost anyone can build a copy of this display.

I also created a program to control these AVRs. The program acts as a server and receives messages from other programs via standard WinAPI methods, so any program (if it has access rights) can interact with my display. As examples of such interaction I made plugins for the Winamp player, the QIP program and Total Commander that display data from these programs. The communication API is completely open, so anyone can write programs and plugins that support such displays.

The aim of my project is to make POV displays much more common than they are now; POV technology allows the creation of useful and good-looking accessories.
2011 - CS040
ENHANCED VISUAL ACUITY OF THALAMIC VISUAL PROSTHESIS SIMULATION USING HEAD AND EYE TRACKING
Sameer Kailasa
American Heritage School Plantation, Plantation, FL
A simulation of a thalamic visual prosthesis system was enhanced with a head tracking feature to study the effects of gaze tracking (both head and eye), eye
tracking, or head tracking alone on visual performance. Head tracking functionality was developed using a system that incorporated infrared light emitting
diodes, a web camera, and an open source computer program called FreeTrack. A C++ program was developed to facilitate communication via serial port
between the head tracking module running on the Gaze computer and the simulator running on the Control computer. Prosthetic vision was simulated by
superimposing the images of optotypes from the standard Snellen chart with a 500-count phosphene pattern, and further varying the position of the composite optotypes based on the head and/or eye position parameter values received from the Gaze computer. The visual acuity of eleven normal, sighted subjects was evaluated under different conditions using the two-alternative forced-choice letter recognition method.

There was a significant increase in visual acuity in the gaze types where head movements were considered (both head-and-eye movement and head movement only) compared to eye movement only. This demonstrates that the incorporation of head tracking brought the simulator closer to real-life conditions, apart from providing a better system of testing to answer questions regarding logistics before a prototype device is built.
Awards won at the 2011 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
2011 - CS052
MONTE CARLO SIMULATION OF A SERIAL DILUTION PCR EXPERIMENT
Jonah Milton Kallenbach
Germantown Academy Upper School, Fort Washington, PA
The purpose of this project was to determine, through a Monte Carlo computer simulation of a 96-well plate undergoing the polymerase chain reaction (PCR), the number of viral integrants in a clone provided by a researcher, and then to examine the efficiency of PCR machines as used in serial dilution experiments in a variety of situations.

The programming language R was used to write a simulation that performs the same action on a 96-well plate as a PCR machine, using Monte Carlo simulation. First, this simulation was used to find the number of integrants in a sample of woodchuck liver, from a woodchuck infected with hepatocellular carcinoma, on which a PCR experiment had been performed by a qualified scientist. The simulation can also be used to analyze the efficiency of PCR machines with other samples. Analyzing the efficiency of 96-well plates in a variety of situations was accomplished by creating a library of functions that try every factorization of 96 on a 96-well plate and pick the best combination of rows, wells, dilution and number of integrants, using a set of statistical tests.

It was found that a model could be created to find the approximately correct number of viral integrants in the sample, and that this model could be further applied to examine the efficiency of 96-well plates in PCR serial dilution experiments. Among the combinations of rows and wells that did not exhibit significant selection bias, based on one particular loop in 'jwidth.r' (one of the functions in the library), the 8 x 12 combination had the least error, making it the best combination. This result is being checked for accuracy by the programs above; however, finding statistically sound results takes thousands of hours of computation on a <3 GHz computer.
Awards won at the 2011 ISEF
Third Award of $1,000 - Computer Science - Presented by Intel
2011 - CS062
DO SAT PROBLEMS HAVE BOILING POINTS?
Soumya Chakrabarti Kambhampati
McClintock High School, Tempe, AZ
The Boolean satisfiability problem, called SAT for short, is the problem of determining whether the variables in a set of Boolean constraints can be assigned values so as to
satisfy the entire set of constraints. Because SAT models the problem of choice making under constraints, SAT solvers have become an integral part in many
computations, like scheduling software, and decision making for robots. Given their practical applications, one question is when SAT problems become hard to
solve. The problem difficulty depends on the constrainedness of the SAT instance, which is defined as the ratio of the number of constraints to the number of
variables. Research in the early 90’s showed that SAT problems are easy to solve both when the constrainedness is low and when it is high, abruptly
transitioning (“boiling over” ) from easy to hard in a very narrow region in the middle. My project is aimed at verifying this surprising finding. I wrote a basic SAT
solver in Python and used it to solve a large number of randomly generated SAT problems with a given level of constrainedness. My experimental results showed
that the percentage of problems with satisfying assignment transitions sharply from 100 to 0 as constrainedness varied between 4 and 5. Right at this point, the
time taken to solve the problems peaks sharply. Thus, SAT problems do seem to exhibit phase transition behavior; my experimental data supported my
hypothesis.
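A sketch of the experiment: generate random 3-SAT instances at a given clause-to-variable ratio and brute-force them. Instance sizes here are kept small so the exhaustive solver finishes; the ratios bracket the reported transition region:

    import random
    from itertools import product

    def random_3sat(n_vars, ratio):
        # Each clause: three distinct variables, each with a random sign.
        clauses = []
        for _ in range(int(ratio * n_vars)):
            vars_ = random.sample(range(n_vars), 3)
            clauses.append([(v, random.choice((True, False))) for v in vars_])
        return clauses

    def satisfiable(n_vars, clauses):
        # Exhaustive search over all 2^n assignments (fine for small n).
        for assign in product((True, False), repeat=n_vars):
            if all(any(assign[v] == sign for v, sign in c) for c in clauses):
                return True
        return False

    for ratio in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
        sat = sum(satisfiable(12, random_3sat(12, ratio)) for _ in range(30))
        print(ratio, sat / 30)   # fraction satisfiable drops sharply near ~4.3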
Awards won at the 2011 ISEF
Trip to attend the Taiwan International Science Fair. - National Taiwan Science Education Center
2011 - CS036
VIRTUAL PRIVATE NETWORK USING PEER-TO-PEER TECHNIQUES
Daniel Kasza
Massachusetts Academy of Math & Science, Worcester, MA
The low performance of traditional client-server virtual private networks (VPNs) led to the investigation of using peer-to-peer communication to improve the bandwidth and latency of communication between the connected clients. A new peer-to-peer connection based VPN protocol was engineered.

The protocol uses both TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) communication to transfer Ethernet frames between the connected clients over IPv4 and IPv6 networks, and it improves network performance by making direct communication between the clients possible. An IPv4-compatible implementation was written in the Java and C programming languages using the Java Native Interface.

The tests were done using Ubuntu Linux on three computers connected to a test network, and it was concluded that the new protocol has better client-to-client performance than traditional VPN protocols while decreasing the load on the server.

The protocol can be used to create VPNs for applications that require low-latency communication, including computer games and Voice over IP. Because the protocol encapsulates Ethernet frames, it can also be used to interconnect separate Ethernet networks.
Awards won at the 2011 ISEF
Fourth Award of $500 - Computer Science - Presented by Intel
2011 - CS059
COLORIZATION OF THE FACE, BY THE FACE AND FOR THE FACE
Heesung Kim
Gyeongnam Science High School, Jinju-Si, Gyeongsangnam-Do, SOUTH KOREA
This paper introduces an automated facial colorization method based on human anatomy.

Colorization is the computer-assisted process of coloring gray-scale images or movies. The process involves segmenting images into regions and specifying a color for each region. There have been many studies on methods for colorization, and based on these studies many tools have been made, but most methods require considerable user intervention and remain time-consuming and expensive.

The face is an important factor in a photo to be colorized. The skin has a different color in each part of the face, and this has to be considered to colorize the face realistically.

To solve these problems, I used facial recognition and simulation based on anatomy. I use a template-based method for face detection and segment the face into four parts based on anatomical features. Next, I estimate the race of the person from the facial structure, and I simulate the skin in each part of the face based on anatomical values and the thickness of the epidermis.

I demonstrate that the proposed method guarantees accuracy without user intervention by comparing a colorized face with the original.
2011 - CS028
IMRT V. 3DCRT: MODULAR EFFICIENCY IN RADIOTHERAPY OF PITUITARY ADENOMA
Marissa Nicole Komanski
City Honors School, Buffalo, NY
In this study, the Varian Eclipse Treatment Planning System was used to assess the total planning efficiencies of its two major planning techniques, intensity-modulated radiation therapy (IMRT) and 3D conformal radiation therapy (3DCRT), on a pituitary adenoma tumor. To set a constant in the experiment, the same
set of patient data was used throughout both planning techniques; having the same patient data ensures that all plans generated can be analyzed the same
way. Using this plan, a set of fourteen plans were generated for each technique, 28 total, with each plan hypothetically delivering a set dose of 50.4 Gy in 28
fractions (1.800 Gy/fraction) to the patient. After computerized dose calculation in Varian Eclipse based on the beam angles and weights entered into the
system, results were generated in the form of dose-volume histograms that were analyzed to produce numerical results. The results were then evaluated based
on two basic dosimetric indices, conformity index (COIN) and integral dose (ID), and compared to the “Ideal Plan” where COIN=0 and ID=1 to determine the
efficiency of one plan over another. After all plans had been analyzed, it was determined that the IMRT method of planning was able to more efficiently treat a
pituitary adenoma tumor, delivering as uniform dose to the target as possible and sparing more surrounding tissues when compared to the results generated by
the 3DCRT method.
2011 - CS309
THE MULTIMODAL REAL-TIME RECOGNITION OF EMOTION IN HUMAN SPEECH
Akash Krishnan, Matthew Fernandez,
Oregon Episcopal School, Portland, OR
Using the C# programming language and an English emotional speech database with 1315 files and four emotions (happy, sad, angry, fearful), we developed and tested an improved multimodal emotion recognition system that determines emotions from an input signal. Emotion recognition has applications in security, gaming, user-computer interaction, and autism research. We extended our previous year's acoustical-feature-based emotion recognition system to different scenarios. First, a feature optimization algorithm was developed and used to increase the performance of the core emotion recognition system. We use two modes of features: lexical and acoustical. We developed and implemented a real-time speech recognition system, and then used a part-of-speech tagger to tag which words of an input signal were subjects, verbs, objects, etc. We implemented the core system to create acoustical models for the different parts of speech. In addition, we kept track of emotional words to determine the polarity of the input signal. For our real-time system, we implemented our previous work to extract and classify emotions at three different lengths of data. We tested our static classification algorithm against last year's system, and it showed a 10-fold cross-validation performance increase from 65% to 78%.
Awards won at the 2011 ISEF
Fourth Award of $200 - Association for Computing Machinery
Award of $3,000 - China Association for Science and Technology (CAST)
Award to Travel to Trento, Italy to participate in summer school "Web Valley" - Fondazione Bruno Kessler
Second Award of $1,500 - Computer Science - Presented by Intel
2011 - CS035
IMPROVING HIGHWAY INTERCHANGES WITH COMPUTER-BASED EVOLUTION
Cassidy Meier Laidlaw
Barrington High School, Barrington, RI
This project used a computer-based evolutionary algorithm to search for possible improvements in highway interchange design. Computer programs were used
to “breed” and “mutate” interchanges, and then the best “offspring” interchanges were selected to “survive” to the next generation. By using computers to
quickly test and change many possibilities, interchanges different from those in use today were created.<br><br>Interchanges were modified during each
generation by moving pieces of them and by adding and deleting roads. The interchanges that survived to the next generation were chosen by a fitness value
calculated as a combination of the time taken to get through the interchange, the building cost, and the percentage of cars that crash. These values were taken
from a simulation that modeled car behavior.<br><br>Because running experiments with large interchanges took a long time, simpler cases were also run. For
instance, the algorithm could produce a straight road from a wavy road, which allowed for lowered cost and less congestion. More complex interchanges
improved but not as quickly or as much as simpler cases.<br><br>This evolutionary-algorithm approach has shown potential to be a useful engineering tool.
Like biological evolution, the downside is the time it takes to produce useful results: several hours to a few days. However, it is easily adapted to different
situations. It could be used for a wide variety of applications and serve as a useful tool for engineering highways in the future.
2011 - CS065
QUADROCOPTER AERIAL MONOCULAR VISION FOR IMPROVED AUTONOMOUS ROBOT NAVIGATION
Kenny Zane Lei
Walnut High School, Walnut, CA
Conventional ground robot navigation and path finding is often inefficient and time-consuming, especially in a maze-like environment. Aerial vision, however, provides a novel perspective for path finding in robot navigation. Aerial vision in combination with a ground robot was compared to ground-robot-only navigation in terms of operational time. A ground robotics platform was based on an iRobot Create and a laptop. Aerial vision was achieved through the Parrot AR.Drone quadrocopter with a built-in camera. A laptop was connected to the camera feed of the quadrocopter via socket connections to its wireless network. The Java programming language was used for both quadrocopter control and image processing. The quadrocopter was initiated and hovered above the robot and maze
environment. Images acquired were initially processed to classify regions as either obstacle or traversable area. Start and end point regions were then
classified within the image. A breadth first search (BFS) algorithm was employed to determine the shortest navigational path that avoids obstacles. When a
traversable path between the detected start and end points is found, the ground robot is sent movement vector commands to navigate around the obstacles.
After a series of trial runs, the novel navigation yielded an average run time of 38.45 seconds while the conventional navigation resulted in an average run time
of 140.57 seconds. The addition of aerial vision from the quadrocopter resulted in a 72.6 percent improvement in operation time for the ground robot. These findings demonstrate that the rich data provided by aerial imagery significantly enhance and improve robot navigation.
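A sketch of BFS shortest-path search on an occupancy grid of the kind derived from the aerial image (0 = traversable, 1 = obstacle); the grid itself is a toy example:

    from collections import deque

    def bfs_path(grid, start, goal):
        """Shortest 4-connected path from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []                      # walk predecessors back to start
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in prev:
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None                            # no traversable path exists

    grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
    print(bfs_path(grid, (0, 0), (2, 0)))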
Awards won at the 2011 ISEF
Award of three $1,000 U.S. Savings Bonds, a certificate of achievement and a gold medallion. - United States Army
2011 - CS058
END OF COURSE EXAM ARRANGEMENT ACCELERATOR
I Hsun Liao
Americas High School, El Paso, TX
Months before an End of Course (EOC) testing day, school administrators tediously arrange rosters of students and teachers. Teachers must give up their regular teaching schedules to monitor tests, and students who are not testing can cause trouble during school hours. This project addresses the problem by developing a program that arranges faculty and students for an EOC testing day through a user-friendly interface, and compares its running time to the time the same process takes manually. The hypothesis is that the automated process will be more efficient than the manual one. First, the program generates student and teacher information based on real enrollment figures, creates a database, and inserts the information. Teachers are then arranged so that they monitor tests during their spare time, defined as their conference periods and the periods in which they teach the testing subject. To select teachers, they are divided into three groups: primary teachers teach only the testing subject; secondary teachers teach the testing subject along with other subjects; and final teachers do not teach the testing subject. Nine logically ordered combinations of these groups are used to produce an arrangement that uses the fewest teachers, although it could still be improved. The program produced the rosters far faster than the manual process (14.5048 s versus 64,800 s), supporting the hypothesis. The algorithm developed could be used to build schedules for any situation involving job shifts.
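The three-way teacher grouping described above is straightforward to express in code. A minimal Python sketch; the data shapes are assumptions, since the abstract does not describe its database schema:

    def classify_teachers(teachers, testing_subject):
        """Split (name, subjects) pairs into the abstract's three groups:
        primary teach only the testing subject, secondary teach it among
        other subjects, and final do not teach it at all."""
        primary, secondary, final = [], [], []
        for name, subjects in teachers:
            if subjects == {testing_subject}:
                primary.append(name)
            elif testing_subject in subjects:
                secondary.append(name)
            else:
                final.append(name)
        return primary, secondary, final

    staff = [("Gomez", {"Algebra"}),
             ("Lee", {"Algebra", "History"}),
             ("Katz", {"Art"})]
    print(classify_teachers(staff, "Algebra"))
    # (['Gomez'], ['Lee'], ['Katz'])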
2011 - CS033
HIGH SCHOOL TIMETABLE GENERATION USING A SIMULATED ANNEALING METAHEURISTIC
Joshua Baruch Lipschultz
Niles North High School, Skokie, IL
Each year, high schools must undergo the time-consuming task of generating class schedules. This involves deciding when classes will occur and how
students are assigned to specific classes, with the goal of maximizing the number of requests satisfied and minimizing the number of conflicts. The purpose of
this project was to design, develop, and test a software program capable of computationally generating such a schedule using a simulated annealing
metaheuristic algorithm, an approximation algorithm often applied to similar complex combinatorial optimization problems. Developing the software involved
breaking the task down into its component parts, designing the data structures each would use, implementing each component in Python, and then testing and
debugging. The quality of the final schedule output was assessed by measuring the total number of schedule conflicts, and the final results were compared to
the number of conflicts present in the manually generated schedule used by Niles North High School for the 2010-2011 academic year. Every trial run of the
software generated a schedule with significantly fewer conflicts than that of the manual method, each in a fraction of the time the manual effort required. One of
the six trials generated a schedule with fewer than half the conflicts of the manual schedule. Although certain simplifications required in this initial design
preclude overly precise comparisons between the two methods, simulated annealing was shown to be a potentially highly effective method for generating high
school class schedules.
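For readers unfamiliar with the metaheuristic, a generic simulated annealing loop looks like the Python sketch below. The cooling schedule and parameter values are illustrative assumptions; the abstract does not specify the project's actual neighborhood moves or conflict-counting function.

    import math
    import random

    def simulated_annealing(initial, neighbor, conflicts,
                            t_start=100.0, t_end=0.01, alpha=0.995):
        """`conflicts` scores a schedule (lower is better); `neighbor`
        proposes a small random change to a schedule."""
        current, temp = initial, t_start
        best = current
        while temp > t_end:
            candidate = neighbor(current)
            delta = conflicts(candidate) - conflicts(current)
            # Always accept improvements; accept worse schedules with a
            # probability that shrinks as the temperature cools.
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                current = candidate
                if conflicts(current) < conflicts(best):
                    best = current
            temp *= alpha
        return best

Occasionally accepting worse schedules early on is what lets the search escape local minima, which is why the technique suits combinatorial problems like timetabling.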
2011 - CS027
THE DESIGN AND IMPLEMENTATION OF A DIALECT OF SCHEME FOR PARALLEL PROCESSING ON THE GPU
Gregory Louis Manis
John F. Kennedy High School, Bellmore, NY
Scheme is a functional programming language in the Lisp family. Functional programming languages are characterized by treating functions as first-class data and computation as the evaluation of functions; they are descended from the lambda calculus, a formal system of mathematical computation. Scheme has a powerful macro system that allows the language to be easily expanded, and its homoiconicity and list-based structure make handling data intuitive.
In this project, the Scheme programming language was studied in depth.
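The "code is data" property mentioned above can be sketched in a few lines. For consistency with the other examples in this listing the illustration below is Python rather than Scheme, and it is only a toy evaluator for list-structured arithmetic expressions, not part of the project.

    import operator

    # A program is just a nested list, e.g. ['+', 1, ['*', 2, 3]],
    # so it can be built and inspected like any other data.
    OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

    def evaluate(expr):
        if not isinstance(expr, list):        # an atom: a plain number
            return expr
        op, *args = expr
        values = [evaluate(a) for a in args]  # recurse into sub-lists
        result = values[0]
        for v in values[1:]:
            result = OPS[op](result, v)
        return result

    print(evaluate(['+', 1, ['*', 2, 3]]))    # prints 7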
Awards won at the 2011 ISEF
Third Award of $300 - Association for Computing Machinery
Second Award of $500 - IEEE Computer Society
2011 - CS039
THE CM EQUINE FACIAL EVALUATION SYSTEM: A COMPUTATIONAL APPROACH TO EQUINE TEMPERAMENT ANALYSIS
Catherine Grace McVey
North Carolina School of Science & Mathematics, Durham, NC
Within the equestrian community there is a great deal of antiquated knowledge relating anatomical features of the equine face to aspects of
personality/temperament. This noninvasive behavioral evaluation technique offers equine professionals a distinct advantage in identifying horses cognitively
suited for success in today’s competitive equestrian disciplines. Methods for applying these techniques have traditionally been guarded as training secrets, and
as a result remain highly subjective, inaccessible, and scientifically unexplored. The purpose of this project was to bring objectivity and accessibility to this facial
evaluation technique via a user-friendly and statistically validated computational approach.

A test-retest methodology was first employed in a bias-controlled setting to evaluate the objectivity of facial classifications. Tests for all facial regions rejected the null hypothesis at the 2% significance level, and the facial
characteristics themselves were concluded to be both objective and quantifiable. A computationally inexpensive trigonometric approach was then developed to
produce 26 quantitative measures capable of accurately differentiating between these discrete facial features. The mathematical system was coded into an
interactive image-analysis interface in the MATLAB mathematical programming suite and applied to a sample of facial profiles from 81 national-caliber Arabian
show horses. The accuracy of these computed measures was statistically validated against real-world performance data in both a trinomial categorization
model capable of predicting riding discipline with 79% accuracy (using only four facial measures) and a four-variable linear combination model capable of
predicting win percentiles with statistically significant degrees of correlation.
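As an illustration of a computationally inexpensive trigonometric measure, the sketch below computes the angle formed at one facial landmark by two others. It is written in Python rather than the project's MATLAB for consistency with the other examples, and the landmark-based formulation is an assumption; the abstract does not list the 26 actual measures.

    import math

    def angle_at(vertex, p1, p2):
        """Angle in degrees at `vertex` between the rays toward p1 and
        p2; each point is an (x, y) pixel coordinate."""
        a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
        a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
        deg = abs(math.degrees(a1 - a2)) % 360
        return min(deg, 360 - deg)

    print(round(angle_at((0, 0), (1, 0), (0, 1)), 1))  # 90.0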
Awards won at the 2011 ISEF
First Award of $1,000 and a plaque - American Veterinary Medical Association
Second Award of $1,500 - Air Force Research Laboratory on behalf of the United States Air Force
2011 - CS006
WRITTEN IDENTITY: A TEXTUAL ANALYSIS PROGRAM
Samantha Smith Moorin
duPont Manual High School, Louisville, KY
The problem addressed in this project was whether a computer program could identify an author using purely textual features. The hypothesis was that the computer program would be three times more likely to guess the correct author than a blind guess.

The procedures were as follows: the results from the project done two years ago were analyzed; new variables were selected based on whether they would improve the project's accuracy, researched, and then coded into the computer program. Next, the program was checked for errors and all errors were fixed, the program was put into web page format, and the authors to be tested were chosen. After that, 10 excerpts of 125 sentences each were chosen for each author, the excerpts were analyzed, and all data were entered into Excel. The results were then gathered and run through a 1-Proportion Z Test.

Not only was the hypothesis supported, but the computer program proved four times more likely to guess the correct author than a blind guess. The results indicate with 93 percent confidence that the program can identify an author on the first try with 50 percent accuracy. Since a blind guess would have 12.5 percent accuracy, the program is four times more likely than a blind guess to name the right author.
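The 1-Proportion Z Test used here is simple to reproduce. A Python sketch with toy numbers; the counts below are hypothetical, not the project's data:

    import math

    def one_proportion_z(successes, n, p0):
        """z statistic for H0: true proportion equals p0
        (normal approximation to the binomial)."""
        p_hat = successes / n
        se = math.sqrt(p0 * (1 - p0) / n)
        return (p_hat - p0) / se

    # e.g. 20 correct identifications in 40 attempts against the 12.5%
    # chance rate implied by choosing among eight authors:
    print(round(one_proportion_z(20, 40, 0.125), 2))  # about 7.17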
2011 - CS037
COMPUTER SLEUTH: IDENTIFICATION BY TEXT ANALYSIS, YEAR TWO
Daniel William Moran
Minerva High School, Minerva, OH
Stylometric authorship attribution is the process of categorizing written work by the quantitative writing style of its author(s). Due to the immense number of
possible measurements to be made on writing style, it has become common for the field to employ computer programs designed to analyze the high-dimensional data used in the process of authorship attribution. The purpose of previous research was to build such a program; the resulting program could
distinguish between two authors using passages of 1000 words with 88 percent accuracy. The purpose of this experiment was twofold: firstly to redesign the
existing program so that it uses permanent, dynamic tables for storage, is more efficient in taking measurements, and can choose from a dynamic group of
possible authors when determining authorship; and secondly to observe the relationship between the lengths of passages passed to the program and its
accuracy. It was hypothesized that if the lengths of the passages supplied to the program decreased, then the program would be less accurate. When choosing
between five possible authors, the new program achieved 64 percent accuracy when analyzing passages of 1500 words, 54 percent accuracy when analyzing
passages of 1000 words, and 44 percent accuracy when analyzing passages of 500 words. The hypothesis was supported by these data. Measurements
were also gathered from passages that could assist future researchers in increasing the accuracy of the program when dealing with multiple candidates for
passage authorship.
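While the abstract does not list the measurements the program takes, typical stylometric features are easy to compute. A minimal Python sketch; the feature choices here are illustrative, not the project's actual set:

    import re
    from collections import Counter

    def style_features(text):
        """A few classic stylometric measurements for one passage."""
        words = re.findall(r"[A-Za-z']+", text.lower())
        sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
        vocab = Counter(words)
        return {
            'avg_word_length': sum(map(len, words)) / len(words),
            'avg_sentence_length': len(words) / len(sentences),
            'type_token_ratio': len(vocab) / len(words),
        }

    print(style_features("Call me Ishmael. Some years ago, I went to sea."))

A classifier then compares a disputed passage's feature vector against the vectors of each candidate author's known passages, which is why accuracy drops as passages shorten: the features are estimated from less text.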
2011 - CS049
ENGAGING ELEMENTARY SPECIAL EDUCATION STUDENTS IN LEARNING MATHEMATICAL CONCEPTS THROUGH MULTIMODAL INTERACTION
SUBJECTIVE WORKLOAD AND ATTENTION LEVEL ANALYSIS THROUGH A BRAIN COMPUTER INTERFACE
Noor Rejah Muhyi
Las Cruces High School, Las Cruces, NM
The aim of this study is to evaluate two multimodal devices, Digital Mat and Dynamic Number Line, on elementary and kindergarten students in both general
and special education (SPED) classrooms. "eSense" brainwave readings (attention and meditation) were taken with the NeuroSky MindSet, a brain-computer interface (BCI).

A total of 46 students in special and general education classrooms were tested. The data recorded were the number of correctly answered questions, EEG frequencies, eSense readings, and the average amount of time spent on each episode. Average task load, measured with the NASA Task Load Index, was also recorded. The control groups were compared to their assigned interface.

In conclusion, this study indicates that the use of technology improves learning of mathematical concepts and thus supports the idea that multimodal devices can be important aids to learning for elementary and kindergarten students, especially SPED students. Furthermore, this study supports the idea that the inclusion of bodily movement not only helps represent math concepts but that embodied cognition also aids achievement. After four iterations, it can be concluded that these novel devices decrease the learning time of SPED students and increase the engagement of the classroom.
2011 - CS004
EYE-CONTROLLED CURSOR
Filip Naiser
Gymnazium Aloise Jiraska, Litomysl, CZECH REPUBLIC
This software allows the user to control the mouse cursor using only the eyes, which are scanned by an ordinary web camera attached to the head (for example, with a common baseball cap).

The primary goal of the project is to give disabled people a way to communicate through affordable, practical, and widely applicable software.

I had to devise my own method of movement detection and analysis. Early versions compared the image of the eye to predefined calibration shots. The software now precisely measures the direction of sight and moves the mouse proportionally to the distance of the pupil from the center. It also recognizes when the eyes are closed, which triggers mouse actions such as left click, right click, and drag and drop. The cursor can move in eight directions, which is suitable for controlling a software keyboard.

Eye-Controlled Cursor has started to benefit people with physical disabilities. Its advantages over other products are a lower cost of installation, the ability to use an ordinary PC, and the ability to replace both mouse and keyboard. It could also find industrial applications where a mouse and keyboard cannot be used.
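The proportional pupil-to-cursor mapping described above can be sketched in a few lines of Python. The gain and dead-zone values below are illustrative assumptions, not the project's calibrated parameters:

    def cursor_step(pupil, center, gain=0.8, dead_zone=3):
        """Cursor movement proportional to the pupil's pixel offset from
        the calibrated center; a small dead zone suppresses jitter."""
        dx = pupil[0] - center[0]
        dy = pupil[1] - center[1]
        if abs(dx) < dead_zone:
            dx = 0
        if abs(dy) < dead_zone:
            dy = 0
        return gain * dx, gain * dy

    print(cursor_step((108, 96), (100, 100)))  # (6.4, -3.2)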
Awards won at the 2011 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel
2011 - CS042
OPTIMIZING KEYBOARDS FOR PEOPLE WITH DISABILITIES
Natalie Janet Nash
Vincentian Academy, Pittsburgh, PA
People with disabilities that prevent them from speaking or using their hands have an extremely difficult time communicating. Augmentative and alternative communication (AAC) devices give these people the ability to communicate with others by typing out their thoughts on a keyboard. Because such users may be able to control only a single muscle, the device can rely on a typing method that uses just one input switch. The user chooses each letter of the desired sentence by hitting the switch with the one muscle they can control as the rows of keys are highlighted sequentially; once a row is selected, the keys in that row are highlighted consecutively until the user selects the desired letter. This single-switch typing process can be extremely tedious and severely limits the user's communication speed. It was hypothesized that an ambiguous keyboard, with more than one letter per key, could significantly increase efficiency by lowering the number of scanning steps. In this project, a word prediction algorithm was written along with a simulation of the process of typing with single-switch scanning. An optimization algorithm was also written to find the most efficient key layout. Optimized layouts were found for three keyboard types: QWERTY, ambiguous, and a commercially available keyboard. The optimized ambiguous keyboard layout performs 13.3% better than the commercially available keyboard.
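The scanning cost being minimized can be modeled directly. A Python sketch of the expected row-plus-column scanning steps for a layout; the scanning model and frequencies are simplified assumptions and ignore the word prediction component:

    def avg_scan_steps(layout, letter_freq):
        """Expected highlight steps per letter for row/column scanning:
        r+1 steps to reach row r, then c+1 steps to reach key c.
        `layout` is a list of row strings; `letter_freq` maps each
        letter in the layout to a relative frequency."""
        steps = {}
        for r, row in enumerate(layout):
            for c, letter in enumerate(row):
                steps[letter] = (r + 1) + (c + 1)
        total = sum(letter_freq.values())
        return sum(steps[ch] * f for ch, f in letter_freq.items()) / total

    # Toy comparison: frequent letters near the top-left scan faster.
    freq = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5}
    print(avg_scan_steps(['eta', 'o'], freq))   # 2.88: lower cost...
    print(avg_scan_steps(['o', 'eta'], freq))   # 3.48: ...than this

An optimizer like the one the abstract describes searches over layouts to minimize exactly this kind of expected cost, weighted by how often each letter is typed.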
Awards won at the 2011 ISEF
First Award of $1,000 - Association for Computing Machinery
2011 - CS060
STICKY SOFTWARE ANALYSIS KIT
Jee-Heon Oh
Sunrin Internet High School, Seoul, SOUTH KOREA
Sticky Software Analysis Kit is a solution that makes it easy to analyze and modify software.

Traditionally, an EXE file has not been regarded as a target for modification. Some tools exist to assist reverse engineering, but they are not good enough: to reverse engineer a program, one must learn a great deal of technical knowledge about the target platform (OS, CPU architecture), programming techniques, APIs, anti-reversing routines, and so on. Moreover, no existing tool can save and apply the results of the reversing work.

To make analysis and modification easier, I researched these problems (details in the research plan) and divided the project into two programs.

The first is Sunion. It solves the result-saving problem: with Sunion, functions and resources can be added to an EXE file. Until now it was almost impossible, even for reverse engineers, to add code to compiled executables; code could only be replaced, under severe limitations. With Sunion, developers can extend already-compiled EXE files that have no source code using the C++ language, and reverse engineers can easily apply the results of their analysis and add new functions.

The second is Sticky Analyser, the analyzer used with Sunion to analyze EXE files. It is designed to make analysis more accessible: the functions it provides require almost no technical knowledge. Beginners can bypass anti-reversing routines (Sticky Analyser has its own debugging engine that bypasses all anti-debugging routines), find code locations with an automatic search function, and run assembly code through an interpreter, capabilities the author states no previous tool offered.
2011 - CS047
INCREASE IN SPEED OF INTERPROCESS INTERACTION IN MICROSOFT SINGULARITY
Gadzhi Shamil'evich Osmanov
Lyceum #572, Center of Mathematical Education, Saint Petersburg, Saint Petersburg, RUSSIA
Singularity is a new experimental Microsoft OS based on type-safe languages and managed code. The core of the OS is its system of Software Isolated Processes (SIPs), and almost all user- and system-level functionality (including hardware drivers) is performed in such SIPs. Communication between SIPs is carried out through safe higher-level channels controlled by contracts. However, this safe architecture considerably limits the performance of the OS.

In this research a new technology, Contract Shared Memory (CSM), is proposed for the OS, providing both safety and efficiency. Special CSM channels are introduced to provide high-performance communication between SIPs. The channels are supervised by a contract, which provides sufficient safety for the exchange operations; the actual data exchange, however, is performed through type-safe shared memory.

The CSM technology was incorporated into Singularity, and support for it was added to the ABI and to the disk drivers. Performance tests were then conducted: they demonstrated a substantial speed increase (4.9 times) for programs that use CSM. The technology was also tested on the built-in web server; rewriting important parts of the web server to use CSM likewise increased its speed by about 5 times.
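As a rough model of the idea (not Singularity's actual implementation, which lives in type-safe systems code, and not its real contract language), the Python sketch below validates only the small control messages against a contract state machine while bulk data changes hands by reference:

    class ContractChannel:
        """Toy contract-supervised channel: the contract is a state
        machine over message types; payloads go through a shared store
        instead of being copied."""
        def __init__(self, contract, start_state):
            self.contract = contract   # {(state, msg_type): next_state}
            self.state = start_state
            self.shared = {}           # stands in for type-safe shared memory

        def send(self, msg_type, key, payload):
            move = (self.state, msg_type)
            if move not in self.contract:
                raise RuntimeError(f"contract violation: {msg_type!r} "
                                   f"not allowed in state {self.state!r}")
            self.shared[key] = payload           # no copy on the data path
            self.state = self.contract[move]
            return key                           # receiver reads shared[key]

    ch = ContractChannel({('idle', 'WriteRequest'): 'busy',
                          ('busy', 'WriteDone'): 'idle'}, 'idle')
    ch.send('WriteRequest', 'block42', b'...disk sector bytes...')
    ch.send('WriteDone', 'ack42', b'')

The speedup comes from keeping the contract check on the cheap control path while the expensive payload copy is eliminated entirely.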
Awards won at the 2011 ISEF
Second Award of $1,500 - Computer Science - Presented by Intel
2011 - CS014
HOME SWEET 127.0.0.1
Austin Alexander Peavy
Alfred M. Barbe High School, Lake Charles, LA
The goal of this project is to determine whether it is possible to make two fully playable and complete games, one using nothing but text (in the style of a command line) and one using graphics, built purely out of HTML and the bare basics of JavaScript.

My hypothesis was that I would indeed be able to create both of the games I had in mind using nothing but HTML and basic JavaScript; the potential is there, although it would be much harder than doing it with intermediate-to-advanced JavaScript.

To make the games work, I needed to set up the JavaScript for each game. For the text-based game, I checked the user's input at each part of the game and made the game respond accordingly. For the graphics-based game, I made the image change on advancement, along with the scenario and the buttons.

My hypothesis was only partly correct. It is indeed possible to create a fully playable and complete game using only HTML and the most basic JavaScript concepts (statements, methods, etc.). However, I was wrong in predicting that the graphics-based game would be harder; it was the easier of the two.
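The check-input-and-respond structure described above is essentially a scene table plus a loop. For consistency with the other examples this sketch is Python rather than HTML/JavaScript, and the scenes are invented:

    # Each scene maps player input to the next scene.
    SCENES = {
        'start': ('You are at a fork. Go "left" or "right"?',
                  {'left': 'cave', 'right': 'end'}),
        'cave':  ('The cave is dark. Type "back" to return.',
                  {'back': 'start'}),
        'end':   ('You made it home. The end.', {}),
    }

    scene = 'start'
    while SCENES[scene][1]:                    # stop when no choices remain
        prompt, choices = SCENES[scene]
        answer = input(prompt + ' ').strip().lower()
        scene = choices.get(answer, scene)     # unknown input repeats the scene
    print(SCENES[scene][0])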
2011 - CS311
DYADIC INTERACTION ASSISTANT FOR TRACKING HEAD GESTURES AND FACIAL EXPRESSIONS
Varun Ramesh, Shantanu Bala
Hamilton High School, Chandler, AZ
When humans interact with each other, they utilize multiple different channels of communication, including verbal communication, facial expressions, and
gestures. Approximately 46% of the information delivered in interpersonal communication is solely visual, leaving people with visual impairments at a great
disadvantage in social environments, often leading to isolation. The goal of this project is to create a novel assistive device, the Social Interaction Assistant, that gives people with visual impairments access to visual social cues such as the emotions of the people they are conversing with. The Social Interaction Assistant automatically recognizes different facial indicators of a person's emotional state and conveys that information discreetly and accurately in
real-time using haptic (vibration) feedback. The Social Interaction Assistant consists of a camera paired with face detection technology to automatically pan and
tilt to track faces and keep them within its field of vision. The video stream from the camera is analyzed in real-time to detect features of the faces such as the
curvature of the eyebrows and mouth, in order to determine the emotional state of the subject's face. For this project, a haptic glove was created from scratch
(dubbed the VibroGlove). A microcontroller allows the software to vibrate the glove in various patterns corresponding to different visual cues. The device
detects and tracks faces with close to 95% accuracy under certain lighting conditions, and can accurately detect facial features such as the state of the eyebrows and mouth. The software runs at approximately 10 frames per second, which is sufficient for a conversation.
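A mapping from recognized expression to vibration pattern might look like the Python sketch below; the pattern encoding and motor indices are invented for illustration, since the abstract does not describe the VibroGlove's actual patterns:

    # Each pattern is a sequence of (motor_index, on_time_ms) pulses.
    PATTERNS = {
        'smile':   [(0, 100), (1, 100), (2, 100)],   # sweep across fingers
        'frown':   [(2, 100), (1, 100), (0, 100)],   # reverse sweep
        'neutral': [(1, 60)],                        # single short pulse
    }

    def haptic_commands(expression):
        """Pulse sequence to send to the glove's microcontroller for a
        recognized expression; unknown expressions fall back to neutral."""
        return PATTERNS.get(expression, PATTERNS['neutral'])

    print(haptic_commands('smile'))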
Awards won at the 2011 ISEF
Team Second Award of $400 for each team member - IEEE Computer Society
Fourth Award of $500 - Comp