Genetic Systems Programming: Theory and Experiences





On the back cover: Designing complex programs such as operating systems, compilers, filing systems, data base systems, etc.



The closure operation supplies the pattern set with some instances of the original patterns. It terminates and can be computed incrementally; for a full description and formal proofs, see [7].

Under this assumption, an important property of such closed pattern sets is that they make it possible to determine whether a target term is a redex merely by scanning that term from left to right, without backtracking over its symbols. When the adaptive order coincides with the left-to-right order, matching items and matching sets coincide with left-to-right matching items and sets respectively [14]. A matching item is a pattern in which all the symbols already matched are ticked. The position of the symbol currently being matched is called the matching position.
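As a toy illustration of this left-to-right, backtrack-free scan, terms and patterns can be represented as nested tuples; the tuple encoding and the wildcard "_" standing for the placeholder are assumptions of this sketch, not the chapter's notation:

```python
# Terms and patterns as nested tuples: ("f", arg1, arg2, ...).
# In a pattern, the string "_" is a placeholder matching any subterm.

def matches(term, pattern):
    """Match a term against a single pattern in one left-to-right scan."""
    if pattern == "_":                       # placeholder accepts anything
        return True
    if term[0] != pattern[0] or len(term) != len(pattern):
        return False                         # root symbol or arity mismatch
    # visit arguments strictly left to right; no backtracking needed
    return all(matches(t, p) for t, p in zip(term[1:], pattern[1:]))

f_ab = ("f", ("a",), ("b",))
print(matches(f_ab, ("f", "_", ("b",))))     # True
print(matches(f_ab, ("g", "_", "_")))        # False
```

A real matching automaton shares these scans across all patterns of the closed set instead of testing each pattern separately, which is exactly what the matching sets below make possible.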

A final matching item may contain unchecked positions. Matching items are associated with a rule name. The term obtained from a given item by replacing every subterm with an unticked root symbol by the placeholder is called the context of the item. In fact, no symbol will be checked until all its ancestors are checked. So, the positions of the placeholders in the context of an item are the positions of the subterms that have not been checked yet.

The set of such positions for an item i is denoted by up(i), short for unchecked positions. A matching set is a set of matching items that have the same context and a common matching position. For initial matching sets, no symbol is ticked. Furthermore, the rule associated with that item must be of the highest priority amongst the items in the matching set.

Since the items in a matching set M have a common context, they all share a common list of unchecked positions and we can safely write up(M). The functions accept and close are similar to those of the same name in [14], and choose picks the next matching position. Let t be a term in which some symbols are ticked. For a matching set M and a symbol s, the choose function selects the position which should be inspected next. In general, the set of such positions is denoted by up(M, s).

Therefore, the positions of these terms are added. Recall that if a function symbol f of arity n is at position p in a term, then the arguments of f occur at positions p.1, ..., p.n. The choice of the positions to inspect will be explained in the next section. Notice that for each of the matching sets in the automaton, the items have a common set of checked symbols. The automaton proceeds by accepting the function symbol f from the initial matching set.

This is because the former is relevant whereas the latter is not. We conclude by prescribing a good traversal order for patterns that are not essentially strongly sequential. The normalisation of subterms occurring at indexes is necessary, so a safer adaptive automaton can be obtained with respect to termination. Examples of such a priority rule include the textual priority rule, which is used in most functional languages.

For equational programs that are not strongly sequential [3], there are some matching sets with no index. A partial index is an index for a maximal number of consecutive items in a matching set, with respect to pattern priorities. Another way consists of associating a weight with each pattern and another with each position in the patterns. A pattern weight represents the probability of the pattern being matched, whilst a position weight is the sum of the probabilities that the argument rooted at that position leads to a non-terminating reduction sequence.

At run-time, the sum of the products of the weight of each pattern and the weight of position p determines the probability of having a non-terminating sequence at p. One may consider minimising such a sum for partial indexes to improve termination. It may also reduce the breadth of the matching automaton, as the closure operation adds only a few patterns.

The motivation behind choosing such partial indexes is that we believe patterns of high priority are of some interest to the user. If a matching set has more than one index, then the index with a minimal number of distinct symbols must be chosen. This is because all the positions are indexes and so must occur in every branch of the subautomaton.

Assume that p and q are two indexes for a matching set M, labelled in the items of M with n and m distinct symbols respectively, where n < m; inspecting p first minimises the breadth of the automaton. When no index can be found for a matching set, a partial index can be selected. If more than one partial index is available, then some heuristics can be used to discriminate between them.

A good traversal order should allow a compromise between three aspects. These are termination, code size and matching time. However, for non-sequential rewriting systems, terminating behaviour is not always possible for all patterns, so a good traversal order for non-sequential systems should maximise this property. The code size associated with a given traversal order is generally represented by the number of states that are necessary in the corresponding adaptive matching automaton.

A good traversal order should minimise the code-size criterion. Finally, the last but not least characteristic of a traversal order is the time necessary for a pattern match to be declared. In general, the matching time with a given traversal order is the length (number of visited positions) of the longest path in the tree that represents that traversal order. A good traversal order should minimise the matching time.

Genetic programming constitutes a viable way to obtain optimal adaptive automata for non-sequential patterns. In the evolutionary process, individuals represent adaptive matching automata for the given pattern set. Starting from a random set of matching automata, generally called the initial population, genetic programming breeds a population of automata through a series of steps, called generations, using the Darwinian principle of natural selection, recombination (also called crossover), and mutation.

Crossover recombines two chosen automata to create two new ones using single-point or two-point crossover, as shown in the next section. Mutation yields a new individual by changing some randomly chosen states in the selected automaton. A state is the pair formed by a matching set and its matching position. For instance, the matching automaton of Figure 4.

Final states are labelled with the number of the rule whose pattern has been matched. Internally, a traversal order is represented as a tree. Crossover may be single-point, double-point or uniform [12]. Mutation acts on a chosen node of the traversal-order tree.

In the former case the inspected position is replaced by another possible position, while in the latter a rule number is randomised and used. In other words, the function evaluates whether the inspected positions for each pattern are valid for that pattern. Note that for fully valid traversal orders, the soundness factor is zero. Initially, this parameter is the empty set. In general, termination is compromised when the traversal order inspects positions that are not indexes for the patterns in question.

Note that traversal orders that allow terminating rewriting sequences for all patterns have a termination factor of zero. However, this is only possible for sequential patterns. Here, we deal with the more complicated case of non-sequential patterns. As stated before, the code size of a matching automaton is proportional to the number of its states.

For a given pattern set, the time spent to declare a match of one of the patterns is proportional to the number of positions visited. This can be roughly estimated by the length (number of positions) of the longest path in the traversal order. There are several approaches to handling the multiple objectives. Here, we use an aggregation, selection-based method to solve the multi-objective optimisation problem.


It aggregates the multiple objectives linearly into a single objective function. For this purpose, it uses weights to compute a weighted sum of the objective functions. The optimisation problem of adaptive matching automata has four objectives, which are the minimisation of soundness (S), termination (T), code size (C) and matching time (M). The problem is transformed into a single-objective optimisation problem whose objective function is the weighted sum of these four factors. To put more emphasis on valid adaptive matching automata, we give more importance to the soundness objective. The remaining objectives are of equal importance.
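A minimal sketch of such a linear aggregation follows; the concrete weight values are illustrative assumptions, since the text only fixes that soundness dominates and the other three objectives carry equal weight:

```python
def aggregate_fitness(s, t, c, m, weights=(0.7, 0.1, 0.1, 0.1)):
    """Weighted sum of the four objectives to be minimised:
    soundness (s), termination (t), code size (c), matching time (m).
    Soundness carries the largest weight so that invalid automata
    are penalised the most."""
    ws, wt, wc, wm = weights
    return ws * s + wt * t + wc * c + wm * m

# a sound automaton (s = 0) beats an unsound one even when it is
# slightly larger and slower
print(aggregate_fitness(1, 0, 10, 5) > aggregate_fitness(0, 2, 12, 6))  # True
```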

It consists of choosing a vector of weights and performing the optimisation process as a single-objective problem. As a single or limited number of solutions, hopefully Pareto-optimal [19], is expected from such a process, it is necessary to perform as many optimisation processes as the number of required solutions [16]. In our experiments, we used the [0. ...] weight vector. The pattern matching automata for some known problems were used as benchmarks to assess the improvement of the evolutionary matching automata with respect to the ones designed using known heuristics.

The Kbl benchmark is the ordinary three-axiom group completion problem. The Comm benchmark is the commutator theorem for groups. For each of the benchmarks, we built both matching automata, i.e. the heuristic one and the evolved one. This should provide an idea about the size of the automaton. Furthermore, we obtained the evaluation times of a given subject term under our rewriting machine (for details see [14]) using both matching automata, as well as the evaluation times of the same subject terms under the system HIPER.

The space and time requirements are given in Table 4. We also gave some heuristics that allow the engineering of a relatively good traversal order. In the main part of the chapter, we described the evolutionary approach that permits the discovery of traversal orders for a given pattern set using genetic programming. For this purpose, we presented how the encoding of traversal orders is done and, consequently, how an evolved traversal order is decoded into the corresponding adaptive pattern-matcher.

We evaluated how sound the obtained traversal order is. The optimisation was based on three main characteristics of matching automata, which are termination, code size and required matching time. Finally, we compared evolutionary adaptive matching automata, obtained for some universal benchmarks, to their counterparts designed using classic heuristics.

References
  • Cooper and N. ...
  • Dershowitz and J. ...
  • Field and P. ...
  • A. Goguen and T. ...
  • Hudak et al. ...
  • Koza, Genetic Programming. MIT Press.
  • Miller, P. Thompson and T. ...
  • Nedjah, C. Walter and S. Mourelle (Eds.), ... Rouge, Lausanne.
  • Sekar, R. Ramesh and I. ...
  • Peyton-Jones, Prentice-Hall International.

In this chapter, attention is focused on using GP to make data collected in large databases more useful and understandable. Successive sections are dedicated to solving the above-mentioned problems using the GP paradigm. The chapter begins with a short introduction to Data Modelling and Genetic Programming.

The last section recapitulates the chapter. A number of approaches have arisen that exploit the ability of biological evolution to adapt. A computer program solving the problem is evolved, which means it is developed automatically. Discovering knowledge from possessed data and making it useful is very important, especially for managers. The chapter, besides the Introduction and Summary, consists of three main sections. Section 5.

Other important problems are connected with prediction and time series modelling. GP can also be used for these tasks.


The third main section is described below; some conclusions are placed in the last section, the Summary. The search space is constrained only by the available model pieces: variables, functions, binary operators, syntax rules. Furthermore, GP is not obliged to include all input variables in the equation, so it can perform a kind of dimensionality reduction. But the key issue in making a GP algorithm work is a proper representation of the problem, because GP algorithms directly manipulate the coded representation of the equation, which can severely limit the space of possible models. An especially convenient way to create and manipulate compositions of functions and terminals is the symbolic expression (S-expression) of the LISP programming language.

It allows building expression trees out of these strings, which can then be easily evaluated. Functions and terminals are, respectively, the internal nodes and the leaves of the trees being evolved by the GP.
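A minimal sketch of evaluating such a tree, using nested Python lists in place of LISP S-expressions; the function set and the encoding are assumptions of the example:

```python
import operator

# function symbols available at inner nodes
FUNCS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr, env):
    """Recursively evaluate an expression tree against variable bindings."""
    if isinstance(expr, (int, float)):
        return expr                          # constant terminal
    if isinstance(expr, str):
        return env[expr]                     # variable terminal
    op, *args = expr                         # inner node: function symbol
    return FUNCS[op](*(evaluate(a, env) for a in args))

# (x * x) + (2 * y) written as a nested list
tree = ["+", ["*", "x", "x"], ["*", 2, "y"]]
print(evaluate(tree, {"x": 3, "y": 4}))      # 17
```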

Using arithmetic functions as operators in GP has some drawbacks. One of them is that the closure property of GP requires that each of the base functions is able to accept, as its arguments, any value that might possibly be returned by any base function and any value that may possibly be assumed by any terminal. Similar protections must be established for many other functions.
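Division by zero is the standard case: the usual remedy is a protected operator that returns a fixed value on the undefined input, so that closure holds. The fallback values below (1.0 and 0.0) are one common convention, not the chapter's prescription:

```python
import math

def pdiv(a, b):
    """Protected division: total on all inputs, so closure holds."""
    return a / b if abs(b) > 1e-9 else 1.0

def plog(a):
    """Protected logarithm: defined for zero and negative arguments."""
    return math.log(abs(a)) if abs(a) > 1e-9 else 0.0

print(pdiv(6.0, 3.0))   # 2.0
print(pdiv(1.0, 0.0))   # 1.0
print(plog(0.0))        # 0.0
```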

When designing a GP algorithm for mathematical modelling, it is necessary to consider most of these problems. The work was done as part of a wider study by Szpunar-Huk, described in [24]. For both fitness functions, the closer the sums are to zero, the better the expressions (solutions) are.

In the selection phase, couples of individuals for crossover are chosen. The selection method proposed here is based on elite selection. The reason is that in GP, during the crossover phase, expressions cannot evolve well because they are frequently destroyed, and it is easy to lose the best ones, which can result in poor performance. After selection, a number of couples (according to a given probability parameter) are copied to the next generation unchanged; the rest of the individuals undergo crossover: a random point is chosen in each individual and these nodes, along with their entire subtrees, are swapped with each other, creating two entirely new individuals, which are placed in the new generation.

Here, in the recombination operation, another parameter is also introduced. Mutation is included as an additional genetic operator. During mutation, nodes or leaves are randomly changed with a very small probability, under the constraint that the type of the node remains unchanged: a terminal is replaced by another terminal and a function by another function. An initial population is randomly generated, but creating a tree with maximum depth is more likely than creating a tree with only one or just a few nodes.
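The two operators can be sketched on list-encoded trees as follows; the encoding and the function and terminal sets are assumptions of this sketch:

```python
import copy
import random

FUNCS = ["+", "-", "*"]          # inner-node symbols
TERMS = ["x", "y", 1.0]          # leaf symbols

def all_paths(tree, path=()):
    """Yield the path (tuple of child indices) of every node in the tree."""
    yield path
    if isinstance(tree, list):
        for i, child in enumerate(tree[1:], start=1):
            yield from all_paths(child, path + (i,))

def get(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def put(tree, path, subtree):
    for i in path[:-1]:
        tree = tree[i]
    tree[path[-1]] = subtree

def crossover(a, b, rng):
    """Single-point subtree crossover: swap random subtrees of two copies."""
    a, b = copy.deepcopy(a), copy.deepcopy(b)
    pa = rng.choice([p for p in all_paths(a) if p])   # skip the root
    pb = rng.choice([p for p in all_paths(b) if p])
    sa, sb = get(a, pa), get(b, pb)
    put(a, pa, sb)
    put(b, pb, sa)
    return a, b

def mutate(tree, rng):
    """Type-preserving point mutation: a function symbol becomes another
    function symbol, a terminal becomes another terminal."""
    t = copy.deepcopy(tree)
    p = rng.choice(list(all_paths(t)))
    node = get(t, p)
    if isinstance(node, list):
        node[0] = rng.choice([f for f in FUNCS if f != node[0]])
    elif p:                                           # a non-root leaf
        put(t, p, rng.choice([x for x in TERMS if x != node]))
    return t

rng = random.Random(0)
a, b = ["+", "x", "y"], ["*", ["-", "x", 1.0], "y"]
child1, child2 = crossover(a, b, rng)
print(a == ["+", "x", "y"])    # True: parents are left untouched
```

Working on deep copies keeps the elite parents intact, matching the scheme above in which the best couples pass to the next generation unchanged.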

Experiments: To validate the described method, some numerical experiments were conducted. Sample sets were generated according to a given function f; the algorithm was looking for this function, but a sample could contain more variables than were used by the given function. The algorithm turned out to be a good tool for exactly rediscovering plenty of functions. In a number of experiments, the vector of input variables xi contains variables that are not taken into account during output calculation; they are additional variables. Exemplary results are shown in Table 5.

It is important to mention that the algorithm requires a number of parameters. Some of them are mentioned above. The most important of them, with their presumed values, are listed below. These are suggested values, often used in the experiments, but in some tasks their values must be carefully tuned. That is why the algorithm is called the Tuning Algorithm (TA).

An initial population of the TA contains sequences generated from an expression taken from the GP algorithm, together with random sequences; one hundred individuals altogether. The population generated in this way is evolved in the standard way: individuals are selected, crossed over and mutated. Mutation, however, is much more important here than in the GP algorithm.

A special kind of mutation is proposed: each real number is treated as a sequence of Arabic numerals. The combined algorithm can be outlined as follows:

1. Create an initial GP population and set GN (the number of generations) to 0
2. While termination criteria are not met do:
...
5. Take S individuals for tuning
6. For each of the S chosen individuals do:
7. Create an initial GA population
8. While termination criteria are not met do:
   - Probabilistically apply the crossover operator
   - Probabilistically apply the mutation operator
   - Go to step 8
9. Increase GN
10. Go to step 2

In practice, the observed data may be noisy and there may be no known way to express the relationships involved in a precise way. That is why GP is widely used to solve the problem of discovering empirical relationships from real observed data. To show how wide a variety of domains GP has been successfully applied to, some experiments with GP on real data are mentioned below.
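The digit-level mutation used for tuning numeric constants can be sketched as follows; the formatting precision and the mutation probability are assumptions of the example:

```python
import random

def mutate_constant(value, rng, prob=0.2):
    """Tuning-style mutation: treat a real number as its decimal digits
    and replace individual digits at random, keeping the sign and the
    decimal point in place."""
    chars = list(f"{value:.4f}")
    for i, ch in enumerate(chars):
        if ch.isdigit() and rng.random() < prob:
            chars[i] = str(rng.randrange(10))
    return float("".join(chars))

rng = random.Random(0)
print(mutate_constant(3.1415, rng))   # a nearby constant
```

Because only digits change, the mutated value stays a well-formed number of the same sign, which keeps the tuned expression syntactically valid.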

Sugimoto, Kikuchi and Tomita were able, by applying some improvements to GP, to predict an equation from time-course data without any knowledge concerning the equation. They predicted biochemical reactions, and the relative square error between predicted and given time-course data was decreased thanks to GP. Langdon and Barrett have used Genetic Programming (GP) to automatically create interpretable predictive models of a small number of very complex biological interactions, which are of great interest to medicinal and computational chemists who search for new drug treatments.

In particular, they have found a simple predictive mathematical model of human oral bioavailability [7]. Makarov and Metiu used GP to find analytic solutions of the time-independent Schrödinger equation.


They tested their method on a one-dimensional anharmonic well, a double well, and a two-dimensional anharmonic oscillator [8]. Diplock, Openshaw and Turton used GP as a tool for creating new models of a complex geographical system. They ran a parallel GP algorithm on a Cray T3D supercomputer to create new types of well-performing mathematical models [9]. The resulting equations can then be tested with statistical methods to examine their ability to predict the phenomenon on unseen data.

Given a set of predetermined, disjoint target classes C1, C2, ..., the task is to build a decision model that assigns each object to one of them. A popular decision model is the decision tree, in which the leaves are the class labels while the internal nodes contain attribute-based tests, with one branch emanating for each possible outcome of the test. Most rule induction algorithms perform a local, greedy search.

There are several properties of GP which make it more convenient for application in data mining tasks than other techniques. The reason is that a local, greedy search selects only one attribute at a time, and therefore the feature space is approximated by a set of hypercubes. In real-world applications, the feature space is often very complex, and a large set of such hypercubes might be needed to approximate the class boundaries. GP algorithms perform a global search. That is why GP copes well with the attribute interaction problem, by manipulating functions and applying genetic operators to them.

Additionally, some data mining algorithms work only with qualitative attributes, so numerical values must be divided into categories. GP can deal with any combination of attribute types. GP algorithms have a high degree of autonomy that makes it possible to discover knowledge previously unknown to the user. One idea to satisfy the closure requirement is to use logical operators for all inner nodes, while leaves have only Boolean values or are represented as functions that return Boolean values.

Other approaches to get around the closure requirements include STGP, mentioned in Section 5. It should be underlined that the success of evolution depends upon the use of a language that can adequately represent a solution to the problem being solved.

Michigan vs. Pittsburgh approaches. The next step in designing a GP for discovering a decision model, after choosing a proper language, is to choose the individual representation.

The choice of individual representation determines the course of the GP algorithm. The most common solution in the literature is to run the GP k times, where k is the number of classes [12]. When GP is searching for rules for a given class, all other classes are merged into one large class.
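The relabelling step of this one-class-at-a-time scheme can be sketched as below; evolving the actual rules is left out, and the data layout is an assumption:

```python
def one_vs_rest_datasets(samples, classes):
    """For each class, relabel the data as a binary problem: the target
    class versus all remaining classes merged into one."""
    for c in classes:
        yield c, [(x, label == c) for x, label in samples]

data = [([1, 2], "A"), ([3, 4], "B"), ([5, 6], "A"), ([0, 1], "C")]
for c, binary in one_vs_rest_datasets(data, ["A", "B", "C"]):
    print(c, [y for _, y in binary])
# A [True, False, True, False]
# B [False, True, False, False]
# C [False, False, False, True]
```

A full system would then run one GP search per relabelled dataset, yielding k rule sets.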

As an example, we present the research of Ross, Fueten and Yashir [16]. The GP algorithm is used by Ross et al. to evolve decision trees; the trees' parameters were obtained by a multiple-stage process of mineral sample image analysis. Such a tree, given a sample of data for a grain, can determine whether the grain corresponds to the mineral the tree is engineered to recognise. Thus the decision model consists of several decision trees, one for each mineral species. The testing set was the rest of the database not used for training.

The authors noticed that the language L2, which is more complex than L1, did not lend any great advantage to the quality of solutions, but is computationally more expensive due to arithmetic operators. They proposed a new constrained-syntax Genetic Programming and reported that the algorithm can classify such data well.

Another example is one of the newest research works, by Bojarczuk et al. Another interesting work was presented by Bentley, who used GP to evolve fuzzy logic rules capable of detecting suspicious home insurance claims. Databases considered in data mining (DM) usually tend to have gigabytes or terabytes of data. The evaluation of an individual against the database is the most time-consuming operation of the GP algorithm. This large amount of stored data potentially contains valuable, hidden knowledge.

This knowledge could probably be used to improve the decision-making process. For instance, data about previous sales of a company may contain interesting relationships between products and customers, which can be very useful to increase sales. In the medical area, the probability of the presence of a particular disease in a new patient could be predicted by learning from historical data. In the prediction task, the database often has the form of a time series, since data are collected daily, and intuitively, in many areas the order of the data is of great importance.

This has caused Genetic Programming to become popular in prediction tasks too. GP algorithms become more complicated when the learning database contains time series. Achieving good results, however, often demands further improvements. These can be simple arithmetic operations like the average or maximum value of an attribute, which have an implicit time range; more complex ones like the function p(attr, num), representing the value of attr for the day num before the current day being evaluated; or operations typical for a given problem, encoding characteristic domain knowledge.
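Two such time-range terminals can be sketched as follows; only p(attr, num) is named in the text, while the series layout (a dict of per-attribute lists indexed by day) and the moving-average terminal are assumptions of the sketch:

```python
def p(series, attr, day, num):
    """Value of attribute `attr` for the day `num` days before `day`."""
    return series[attr][day - num]

def moving_average(series, attr, day, window):
    """An aggregate terminal with an implicit time range: the mean of the
    last `window` values of `attr` up to and including `day`."""
    values = series[attr][day - window + 1 : day + 1]
    return sum(values) / len(values)

prices = {"close": [10.0, 11.0, 12.0, 13.0, 14.0]}
print(p(prices, "close", day=4, num=2))                   # 12.0
print(moving_average(prices, "close", day=4, window=3))   # 13.0
```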

A desirable property of such a rule is its predictive power in assigning the class of a new object. Sometimes the test data set is given in advance, but when there is only one data set available, some technique for testing predictions is necessary. The most common one is N-fold cross-validation. This method divides the data set into N mutually exclusive subsets of the same size. In each of the N steps of the algorithm, one subset is used for testing and the rest of them for the learning process. In the learning part of the algorithm, the goal attribute values are available, while in the testing part these values are used to evaluate the predictive ability of the rule.
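The N-fold splitting step can be sketched as follows; the index-striped fold assignment is an implementation choice of this sketch:

```python
def n_fold_splits(n_samples, n_folds):
    """Partition sample indices into N mutually exclusive folds; each fold
    serves once as the test set while the remaining folds form the
    training set."""
    folds = [list(range(i, n_samples, n_folds)) for i in range(n_folds)]
    for k in range(n_folds):
        test = folds[k]
        train = [i for j, fold in enumerate(folds) if j != k for i in fold]
        yield train, test

for train, test in n_fold_splits(6, 3):
    print(sorted(test), sorted(train))
# [0, 3] [1, 2, 4, 5]
# [1, 4] [0, 2, 3, 5]
# [2, 5] [0, 1, 3, 4]
```

For time series, plain random or striped folds leak future information into training; the order-preserving windowing described below is the usual alternative.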

The database used in the experiments came from the Warsaw Stock Exchange and consists of 27 major stock market indexes. The sample period for the indexes runs from April 14 until September 7. The original series are sampled at daily frequency.

The database contains indexes for joint-stock companies. An individual represents a decision model which consists of two trees. One tree determines whether given stocks should be bought, and the other whether stocks should be sold. The course of the algorithm is outlined below.

First, a random population is generated, and while termination criteria are not met, evaluation, selection, crossover and mutation operations on individuals are performed. A characteristic element of the evolution process is the evaluation phase:

1. Create an initial GP population
2. While termination criteria are not met do:
3. For each session from the database (from the oldest to the newest one) do:
...
5. For each joint-stock company C do:
6. For each individual I do:
7. If I possesses stock of company C and I wants to sell V:
8. Sell all stock from the portfolio of the individual
   - Make a ranking of individuals
   - Move the N best individuals to the next generation unchanged
   - Go to step 2

An individual is allowed to speculate on the stock exchange by buying and selling stocks (securities) sequentially, starting with an empty portfolio, for the data from each session in the database, from the oldest to the newest session.

Finally, all stocks are sold and a ranking of individuals is performed according to the gain or loss the given strategy brought. Except for time period T3, the average results are better than the WIG index. But the strategies are far from allowing gains during a bear market. The best one seems to be the 9th strategy, which has a quite simple and short notation. Selling tree: MACD12 ... Sunspots appear as dark spots on the surface of the Sun. They typically last for several days, although very large ones may live for several weeks.

It has long been known that sunspots appear in cycles. The average number of visible sunspots varies over time, increasing and decreasing on a regular cycle. The number of sunspots during a solar maximum varies from cycle to cycle. Sunspot records have been collected for a long time. Changes in the Sun's activity are shown in the figure. Knowledge of solar activity levels is often required years in advance.


The general way in which training and test data were generated is presented in the figure. First, the learning time period (t0, t1), the number of attributes n, and the time period length d1 are determined. Attribute values are sums of the sunspots appearing during a time period of length d1. Additionally, the distance p, denoting the distance between the time periods used for the value of the nth attribute and y, is set.
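The windowed data generation described above can be sketched as follows; the names n, d1 and p follow the text, while the flat-list layout and the toy series are assumptions:

```python
def make_sample(counts, start, n, d1, p):
    """Build one (attributes, target) pair from daily sunspot counts:
    each attribute is the sum over a window of length d1, and the target
    is the sum over a window starting p steps after the last attribute."""
    attrs = [sum(counts[start + i * d1 : start + (i + 1) * d1])
             for i in range(n)]
    y_start = start + n * d1 + p
    y = sum(counts[y_start : y_start + d1])
    return attrs, y

counts = list(range(12))                 # toy stand-in for sunspot counts
attrs, y = make_sample(counts, start=0, n=2, d1=3, p=1)
print(attrs, y)                          # [3, 12] 24
```

Sliding `start` over the series produces the full training and test sets while preserving temporal order.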

In short-term tests, the approximation turned out to be very good.

Evolution was run for a number of generations. The best approximation for the given parameter values is presented in the figure. The average percentage error obtained during this experiment was small. Increasing the number of attributes and lengthening the learning time period could probably improve the predictive ability, but in the case of solar activity this is not possible due to the lack of data collected in previous centuries.

The described experiments showed that the GP algorithm can perform well in real-data time series modelling (figures: sunspot approximation, short-term test; the best approximation of sunspots, long-term test). Each approach demanded the introduction of special elements into the standard GP algorithm. As an example, the work of Bhattacharyya, Pictet and Zumbach can be mentioned: they induce trading decision models from high-frequency foreign exchange (FX) market data.

They suggested the incorporation of domain-related knowledge and semantic restrictions to enhance the GP search. The database was transformed and normalised for better performance of the evolution process [23]. They have demonstrated that models can be developed for the non-linear dynamics of phytoplankton, both as a set of rules and as mathematical equations [23].

The problems are not new ones, but there are still no commonly accepted methods for solving them. Genetic Programming is one of the valuable approaches to them. The same method used for mathematical modelling was also applied to time series modelling. We do not know whether a solution to this problem exists; it is possible that the problem cannot be solved on the basis of earlier data or the proposed representation of individuals. As is known, GP is able to solve real problems (see the work of Koza, for example), but, as is characteristic of all metaheuristics, every use of such an approach must be very carefully tuned to the problem being solved.

References
  • ... MIT Press
  • Koza JR, A genetic approach to econometric modelling. Pergamon Press, Oxford
  • Montana DJ, Strongly typed genetic programming. Evolutionary Computation 3(2)
  • ... Genome Informatics, Springer
  • ... Springer-Verlag, Computational Integration for Modelling
  • Koza JR, Concept formation and decision tree induction using the genetic programming paradigm. Springer-Verlag, Berlin
  • Freitas AA, A survey of evolutionary algorithms for data mining and knowledge discovery. In: Ghosh A, Tsutsui S, Advances in Evolutionary Computation
  • ... Machine Vision and Applications, Springer-Verlag
  • ... Comput Geosci, Pergamon Press
  • ... Genetic and Evolutionary Computation Conf.
  • Kippenhahn R, Discovering the Secrets of the Sun
  • ... Ecological Modelling
  • Szpunar E, Data mining methods in prediction of changes in solar activity (in Polish)
  • Myszkowski PB, Data mining methods in stock market analysis (in Polish)

Empirical results reveal that Genetic Programming techniques are promising methods for stock prediction.

Finally, an ensemble of these two techniques is formulated using a multi-objective evolutionary algorithm. The results obtained by the ensemble are better than those obtained by each GP technique individually. The process behaves more like a random walk and is time-varying.

The obvious complexity of the problem paves the way for the importance of intelligent prediction paradigms.



During the last decade, stock and futures traders have come to rely upon various types of intelligent systems to make trading decisions [1], [2], [7]. Through an investment in a Nasdaq index tracking stock, investors can participate in the collective performance of many of the Nasdaq stocks that are often in the news or have become household names.

It is used for a variety of purposes, such as benchmarking fund portfolios, index-based derivatives and index funds. Neural networks are excellent forecasting tools and can learn from scratch by adjusting the interconnections between layers. Some conclusions are also provided towards the end.

For both indices, we divided the entire data into two almost equal parts. No special rules were used to select the training set, other than ensuring a reasonable representation of the parameter space of the problem domain.