Analysis of Artificial Brain

 

Divya Bharti

Birla Institute of Science and Technology, Pilani

*Corresponding Author E-mail: divyabharti171990@gmail.com

 

Abstract:

Artificial intelligence has entered nearly every activity of our daily life; it not only helps solve our social problems but also contributes to the development of a country. From agriculture to Robot Technology (RT), AI is a key driver of development. AI fundamentally works on patterns, learning from past experience and then applying it, and this reliance on past experience can act as a constraint. In this paper we therefore examine how an Artificial Brain might be developed in which a machine can take logical decisions even without past experience.

 

KEY WORDS: Peripheral ganglia, Hypothalamus.

 

1. INTRODUCTION:

1.1 Parts of Brain:

The human brain is the center of the human nervous system. It has the same general structure as the brains of other mammals, but is larger than expected on the basis of body size among other primates. Estimates for the number of neurons (nerve cells) in the human brain range from 80 to 120 billion. Most of the expansion comes from the cerebral cortex, especially the frontal lobes, which are associated with executive functions such as self-control, planning, reasoning and abstract thought. The portion of the cerebral cortex devoted to vision is also greatly enlarged in human beings, and several cortical areas play specific roles in language, a skill that is unique to humans.

 

 

Fig. 1.1 The brain

 

1.     Cerebrum: Largest part of the brain; composed of two hemispheres. Controls conscious activities: intelligence, memory, emotions, voluntary muscle movement, personality.

2.     Corpus Callosum: Connects right and left cerebral hemispheres and maintains communication between them.

3.     Cerebellum: Located in the back of the brain. Regulates posture, balance, muscle tone; coordinates voluntary muscle movement.

4.     Brain Stem: Composed of the pons, midbrain and medulla oblongata. The pons and midbrain act as pathways to different parts of the brain. The medulla oblongata controls involuntary activities such as breathing, heart rate and blood pressure.

5.     Thalamus: Main site of sensory processing.

6.     Hypothalamus: Controls many activities relating to homeostasis, such as body temperature, breathing rate, and feelings of thirst and hunger. Works with the medulla oblongata.

7.     Pituitary Gland: The "master gland"; releases hormones that regulate other endocrine glands.

8.     Pineal Gland: Light-sensitive gland that regulates daily biorhythms and reproductive cycles.

 

1.2 Basic Building Block of the Brain - the Neuron:

A neuron (also known as a neurone or nerve cell) is an electrically excitable cell that processes and transmits information by electrical and chemical signaling. Chemical signaling occurs via synapses, specialized connections with other cells. Neurons connect to each other to form neural networks. Neurons are the core components of the nervous system, which includes the brain, spinal cord, and peripheral ganglia. A number of specialized types of neurons exist: sensory neurons respond to touch, sound, light and numerous other stimuli affecting cells of the sensory organs that then send signals to the spinal cord and brain. Motor neurons receive signals from the brain and spinal cord, cause muscle contractions, and affect glands. Interneurons connect neurons to other neurons within the same region of the brain or spinal cord.

 

A typical neuron possesses a cell body (often called the soma), dendrites, and an axon. Dendrites are thin structures that arise from the cell body, often extending for hundreds of micrometres and branching multiple times, giving rise to a complex "dendritic tree". An axon is a special cellular extension that arises from the cell body at a site called the axon hillock and travels for a distance, as far as 1 m in humans or even more in other species. The cell body of a neuron frequently gives rise to multiple dendrites, but never to more than one axon, although the axon may branch hundreds of times before it terminates. At the majority of synapses, signals are sent from the axon of one neuron to a dendrite of another. There are, however, many exceptions to these rules: neurons that lack dendrites, neurons that have no axon, synapses that connect an axon to another axon or a dendrite to another dendrite, etc.

 

All neurons are electrically excitable, maintaining voltage gradients across their membranes by means of metabolically driven ion pumps, which combine with ion channels embedded in the membrane to generate intracellular-versus-extracellular concentration differences of ions such as sodium, potassium, chloride, and calcium. Changes in the cross-membrane voltage can alter the function of voltage-dependent ion channels. If the voltage changes by a large enough amount, an all-or-none electrochemical pulse called an action potential is generated, which travels rapidly along the cell's axon, and activates synaptic connections with other cells when it arrives.
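
The all-or-none behaviour described above is commonly illustrated with a leaky integrate-and-fire model. The following Python sketch is only a minimal illustration, with arbitrary round-number parameters rather than measured values: the membrane voltage integrates the input current, leaks back toward rest, and emits a spike whenever it crosses a fixed threshold.

    import numpy as np

    def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                     v_thresh=-50.0, v_reset=-70.0):
        """Leaky integrate-and-fire neuron (illustrative parameters)."""
        v = v_rest
        trace, spikes = [], []
        for t, i_in in enumerate(current):
            # The membrane leaks toward rest while integrating the input.
            v += dt * (-(v - v_rest) + i_in) / tau
            if v >= v_thresh:          # all-or-none action potential
                spikes.append(t)
                v = v_reset            # reset after the spike
            trace.append(v)
        return np.array(trace), spikes

    # A constant suprathreshold input makes the neuron fire regularly.
    trace, spikes = simulate_lif(np.full(5000, 200.0))
    print(len(spikes), "spikes in 5000 steps")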

 

With the exception of neural stem cells and a few other types of neurons, neurons do not undergo cell division. In most cases, neurons are generated by special types of stem cells. Astrocytes, a type of glial cell, have also been observed to turn into neurons by virtue of their stem-cell-like pluripotency. In humans, neurogenesis largely ceases during adulthood; only for two brain areas, the hippocampus and the olfactory bulb, is there strong evidence for the generation of substantial numbers of new neurons.

 

Certain recognition features make it possible to identify an event and to trigger a suitable, precise system reaction. The system distinguishes elements of the environment by representing them with predefined features specified by an ontology. It is constructed as a mediating module between the environment and a subsystem of action, but it remains a problem solver. The central problem is the fusion of data, closing the loop "event, recognized symbols, structure of those symbols, interpretation, action, event again". By construction, such systems are totally decomposable. These two types of systems, reactive and perceptive, solve well-stated problems and are built specifically for that purpose. Any learning they exhibit is of the reinforcement type, intended to improve reactive performance by narrowing the gap between the real event and the recognized one. But a third type of system, of a very different nature, exists. For these new systems, inspired by living ones, the main question is no longer to solve well-known problems in formal ways, but rather to explore the self-organization of some of their components, so that the systems formulate problems for themselves and treat them in their own way. Such a system, when it copies the behavior of a living organism, must have the same reasons for adopting a behavior as the living organism it simulates. These systems, called self-adaptive systems, differ radically from the two previous ones.

 

1.3 THE SELF-ADAPTIVE SYSTEMS AND THE MOTIVATED ACTION:

A self-adaptive system is a system composed of two different strongly linked parts:

1    A substratum part, rational in nature, managing the inputs and the reactive effects as well as the logical, rational automatic actions;

2    A specific part that deliberately represents the current situation of the system in its environment, according to certain subjective features, and that controls the substratum part. Such a system is, by the very means of its structure, active for itself. The part representing the current situation is constituted of many entities in permanent reorganization, adapting this internal organization both to the actions towards the environment and to its own organizational state, as in the brain. A minimal structural sketch of this two-part loop is given below.
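
To make this two-part decomposition concrete, here is a deliberately simplified structural sketch in Python. All names (Substratum, SituationModel and so on) are our own illustrative inventions, not an implementation from the literature: the substratum manages inputs and reflex-like automatic actions, while the representational part maintains an internal model of the situation, reorganizes itself, and can override the substratum.

    class Substratum:
        """Rational part: manages inputs and automatic, reflex-like actions."""
        def sense(self, environment):
            return environment.get("signal", 0.0)

        def reflex(self, signal):
            # A fixed, rule-based reaction: the logical, automatic layer.
            return "withdraw" if signal > 0.8 else "idle"

    class SituationModel:
        """Representational part: entities in permanent reorganization."""
        def __init__(self):
            self.state = {}          # the system's own view of its situation

        def reorganize(self, signal, last_action):
            # Adapt the internal organization to the environment AND to itself.
            self.state["arousal"] = (0.9 * self.state.get("arousal", 0.0)
                                     + 0.1 * signal)
            self.state["last_action"] = last_action

        def control(self, proposed_action):
            # The representational part may override the substratum.
            if self.state.get("arousal", 0.0) > 0.5 and proposed_action == "idle":
                return "explore"
            return proposed_action

    # One step of the coupled loop:
    substratum, model = Substratum(), SituationModel()
    signal = substratum.sense({"signal": 0.6})
    action = model.control(substratum.reflex(signal))
    model.reorganize(signal, action)
    print(action, model.state)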

 

2 THE ARTIFICIAL CONSCIOUSNESS AND EMOTIONS GENERATED:

2.1 THE ARTIFICIAL EMOTIONS:

The principle by which an emotion is generated is therefore the following. The input sensors, via interface agents, activate structuring agents, which in turn activate the corresponding morphology agents. The incentive organization generates a specific target form in the morphology: the incentive morphology. The current morphology, which describes the aspectual activity, has to transform itself into the incentive morphology. To do so, the aspectual agents must produce some specific activity, and the analysis agents control them accordingly. The features of the incentive and current morphologies are the determinants of the emotion. If the current morphology has a complicated shape, far from the incentive morphology, the system expresses a state of tension that it will systematically try to reduce. This tendency toward reduction corresponds precisely to the discharge of energy that S. Freud evoked in the antagonistic action of drives toward states of quietude.

 

Pleasure is the case in which the current morphology approaches the incentive one slowly but steadily. After the activity of the interface agents, the aspectual agent organization generates a global activity, a global process represented geometrically by the organization of the morphology.

 

The incentive morphology generates a singular peak that activates the corresponding aspectual agents. The current descriptive morphology of the aspectual agents, considered by projection onto R2, leads to the constitution of many successive peaks with progressive features, and this shape converges toward the peak of the incentive morphology. In its looped functioning, the analysis agents reduce the morphology to a single prominent shape. There is then a reduction of tension and a sensation of pleasure, as S. Freud described.
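
The mechanism can be caricatured numerically. If the current and incentive morphologies are represented as vectors and the tension as the distance between them, the aspectual activity amounts to reducing that distance step by step, with "pleasure" as a slow, steady decrease. This is only an illustrative simplification of ours, not the multi-agent machinery described above.

    import numpy as np

    rng = np.random.default_rng(0)
    incentive = np.array([1.0, 0.0, 0.5])   # target ("incentive") morphology
    current = rng.uniform(-1, 1, size=3)    # current aspectual morphology

    for step in range(20):
        tension = np.linalg.norm(current - incentive)
        # Aspectual activity: move the current morphology toward the target.
        current += 0.3 * (incentive - current)
        new_tension = np.linalg.norm(current - incentive)
        # "Pleasure": tension decreases slowly but steadily toward quietude.
        print(f"step {step:2d}  tension {tension:.3f} -> {new_tension:.3f}")
        if new_tension < 1e-2:
            break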

 

2.2 CONCLUSION:

The emergence of emotions in a data-processing system, or in a robot, has been presented as the stabilization, over time, of activities in a complex multi-agent system, understood in an organizational and constructivist way. This emergence, represented by a functioning with periodic loops of activity, is the global state of a set of agent organizations.

 

To achieve this type of functioning, some agents, the morphology agents, represent the behaviors of aspectual agents, which themselves represent the minimal elements of significance.

 

The fact that the activity of a system can be endowed with emotions rests, finally, on a strong coupling between the computations that organize the system and the representation of those computations, which permits the self-control of groups of agents.

 

The importance of such a coupling process, binding the parts to the whole and binding groups of agents to their significance as represented by the morphology agents, is considerable. It is the fundamental principle of the functioning of the systems we have called self-adaptive, which are today the only ones able to self-control complex systems. It generalizes the notions of feedback and systemic loop and leads to the realization of autonomous systems that essentially produce states by emergence.

 

A system generating emotions while using a bodily substratum, and proceeding by organizational emergence, thus has, in our view, a complex structure at the level of its organization. But this organization can also produce a representation of itself, of its own morphology, leading to the notion of "its own body". The system can then use this morphology as a commitment to act. Combining geometric and cognitive aspects, this is the sign of an organizational semiotics that summarizes at once the process of reorganization and its result. The result of this constructive self-observation can be delivered by the system to all observers. In this sense, such a system can express itself rather than merely display values. The difference between such a system, which expresses itself subjectively according to its intentions, and one that merely displays complicated information well adapted to its user, is large, and it marks a kind of rupture in the very vast field of present-day computer science.

 

3.     THE ARTIFICIAL BRAIN IN REALITY: THE BLUE BRAIN PROJECT:

3.1 AN INSIGHT:

A network of artificial nerves is evolving right now in a Swiss supercomputer. This bizarre creation is capable of simulating a natural brain, cell-for-cell. The Swiss scientists, who created what they have dubbed "Blue Brain", believe it will soon offer a better understanding of human consciousness. This is no sci-fi flick; it’s an actual ‘computer brain’ that may eventually have the ability to think for itself. Exciting? Scary? It could be a little of both.

 

The designers say that "Blue Brain" was willful and unpredictable from day one. When it was first fed electrical impulses, strange patterns began to appear with lightning-like flashes produced by ‘cells’ that the scientists recognized from living human and animal processes.

 

Neurons started interacting with one another until they were firing in rhythm. "It happened entirely on its own," says biologist Henry Markram, the project's director. "Spontaneously."

 

The project essentially has its own factory for producing artificial brains: its computers can clone nerve cells quickly, allowing the production of whole series of neurons of all different types. Because no two cells in a natural brain are exactly identical, the scientists make sure the artificial cells used for the project are also random and unique.

 

Does this ‘Brain’ have a soul? If it does, it is likely to be the shadowy remnants of thousands of sacrificed rats whose brains were almost literally fed into the computer. After opening the rat skulls and slicing their brains into thin sections, the scientists kept the slices alive. Tiny sensors picked up individual neurons, recording how the cells fired and how the adjacent cells responded. In this way the scientists were able to collect entire repertoires of actual rat behavior: essentially, how a rat would respond in different situations throughout its life.

 

The researchers say it wouldn't present much of a technological challenge to bring the brain to life. "We could simply connect a robot to the brain model," says Markram. "Then we could see how it reacts to real environments."

 

Although over ten thousand artificial nerve cells have already been woven in, the researchers plan to increase the number to one million by next year. The researchers are already working with IBM experts on plans for a computer that would operate at inconceivable speeds – something fast enough to simulate the human brain. The project is scheduled to last beyond 2015, at which point the team hopes to be ready for their primary goal: a computer model of an entire human brain.

 

3.2 WHAT IS THE BLUE BRAIN PROJECT ALL ABOUT?

The Blue Brain project is the first comprehensive attempt to reverse-engineer the mammalian brain, in order to understand brain function and dysfunction through detailed simulations.

 

Fig 2 Blue Brain project

3.3 INTERPRETING THE RESULTS:

Running the Blue Brain simulation generates huge amounts of data. Analyses of individual neurons must be repeated thousands of times, and analyses dealing with network activity must handle data that easily reaches hundreds of gigabytes per second of simulation. Using massively parallel computers, the data can be analyzed where it is created (server-side analysis for experimental data, online analysis during simulation). Given the geometric complexity of the column, visual exploration of the circuit is an important part of the analysis. Mapping the simulation data onto the morphology is invaluable for immediate verification of single-cell activity as well as network phenomena. Architects at EPFL have worked with the Blue Brain developers to design a visualization interface that translates the Blue Gene data into a 3D visual representation of the column. A different supercomputer is used for this computationally intensive task. Visualizing the neurons' shapes is a challenging task, given that a column of 10,000 neurons rendered as a high-quality mesh (see Fig. 3) amounts to roughly 1 billion triangles, for which about 100 GB of management data is required. Simulation data with a resolution of electrical compartments for each neuron accounts for another 150 GB. As the electrical impulse travels through the column, neurons light up and change color as they become electrically active.
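
The figures quoted above imply about 100 bytes of management data per triangle and about 100,000 triangles per neuron; a back-of-the-envelope check of the article's own numbers (the arithmetic is ours):

    neurons = 10_000
    triangles = 1_000_000_000            # ~1e9 triangles for the whole column
    mesh_bytes = 100 * 10**9             # ~100 GB of management data
    print(mesh_bytes / triangles, "bytes per triangle")   # -> 100.0
    print(triangles / neurons, "triangles per neuron")    # -> 100000.0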

 

A visual interface makes it possible to quickly identify areas of interest that can then be studied more extensively in further simulations. A visual representation can also be used to compare the simulation results with experiments that show electrical activity in the brain. This calibration, comparing the functioning of the Blue Brain circuit with experiment and improving and fine-tuning it, is the second stage of the Blue Brain project, expected to be complete by the end of 2007.

 

 

Fig 3 Blue brain circuit

 

An international team of scientists in Europe has created a silicon chip designed to function like a human brain. With 200,000 neurons linked up by 50 million synaptic connections, the chip is able to mimic the brain's ability to learn more closely than any other machine.

 

Although the chip has a fraction of the number of neurons or connections found in a brain, its design allows it to be scaled up, says Karlheinz Meier, a physicist at Heidelberg University, in Germany, who has coordinated the Fast Analog Computing with Emergent Transient States project, or FACETS. The hope is that recreating the structure of the brain in computer form may help to further our understanding of how to develop massively parallel, powerful new computers, says Meier.

 

This is not the first time someone has tried to recreate the workings of the brain. One effort called the Blue Brain project, run by Henry Markram at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, has been using vast databases of biological data recorded by neurologists to create a hugely complex and realistic simulation of the brain on an IBM supercomputer.

 

FACETS has been tapping into the same databases. "But rather than simulating neurons," says Karlheinz, "we are building them." Using a standard eight-inch silicon wafer, the researchers recreate the neurons and synapses as circuits of transistors and capacitors, designed to produce the same sort of electrical activity as their biological counterparts.

 

A neuron circuit typically consists of about 100 components, while a synapse requires only about 20. However, because there are so many more of them, the synapses take up most of the space on the wafer, says Karlheinz.

 

The advantage of this hardwired approach, as opposed to a simulation, Karlheinz continues, is that it allows researchers to recreate the brain-like structure in a way that is truly parallel. Getting simulations to run in real time requires huge amounts of computing power. Plus, physical models are able to run much faster and are more scalable. In fact, the current prototype can operate about 100,000 times faster than a real human brain. "We can simulate a day in a second," says Karlheinz.
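
The quoted claim is easy to verify: at a speed-up factor of 100,000, one simulated day does indeed fit into about a second (a trivial check of the article's own numbers):

    seconds_per_day = 24 * 60 * 60      # 86,400 s
    speedup = 100_000
    print(seconds_per_day / speedup)    # ~0.86 s: "a day in a second"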

 

4. The brain, neural networks and computer

4.1 Introduction

Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and brain biological architecture is debated, as it is not clear to what degree artificial neural networks mirror brain function.

 

 

Fig 4 Computer simulation of the branching architecture of the dendrites of pyramidal neurons.

 

A subject of current research in theoretical neuroscience is the question surrounding the degree of complexity and the properties that individual neural elements should have to reproduce something resembling animal intelligence.

 

Historically, computers evolved from the von Neumann architecture, which is based on sequential processing and execution of explicit instructions. On the other hand, the origins of neural networks are based on efforts to model information processing in biological systems, which may rely largely on parallel processing as well as implicit instructions based on recognition of patterns of 'sensory' input from external sources. In other words, at its very heart a neural network is a complex statistical processor (as opposed to being tasked to sequentially process and execute).

 

Neural coding is concerned with how sensory and other information is represented in the brain by neurons. The main goal of studying neural coding is to characterize the relationship between the stimulus and the individual or ensemble neuronal responses and the relationship among electrical activity of the neurons in the ensemble. It is thought that neurons can encode both digital and analog information.
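
One simple way to see how all-or-none (digital) spikes can nevertheless carry analog information is rate coding: the stimulus intensity sets the firing probability per time step, and the spike count recovers the analog value. The sketch below is a standard textbook idealization, not a claim about any specific biological code.

    import numpy as np

    def rate_code(intensity, n_steps=1000, max_rate=0.2, seed=0):
        """Encode an analog intensity in [0, 1] as a binary spike train."""
        rng = np.random.default_rng(seed)
        p_spike = intensity * max_rate            # spike probability per step
        return rng.random(n_steps) < p_spike      # digital, all-or-none events

    for intensity in (0.1, 0.5, 0.9):
        spikes = rate_code(intensity)
        # Decoding: the spike count recovers the analog value (noisily).
        print(f"intensity {intensity:.1f} -> {spikes.sum()} spikes / 1000 steps")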

 

4.2 NEURAL NETWORKS AND ARTIFICIAL INTELLIGENCE:

A neural network (NN), in the case of artificial neurons called artificial neural network (ANN) or simulated neural network (SNN), is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionistic approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.

 

In more practical terms neural networks are non-linear statistical data modeling or decision making tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.
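
As an illustration of non-linear statistical data modeling, the following sketch trains a tiny two-layer network on XOR, a relationship no linear model can capture. It is a minimal from-scratch example; the architecture, seed and learning rate are arbitrary choices for brevity, not a reference implementation.

    import numpy as np

    rng = np.random.default_rng(42)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR is non-linear

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)    # hidden layer
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)    # output layer
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                      # forward pass
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)           # backward pass
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

    # Should approach [0, 1, 1, 0] (plain gradient descent can occasionally
    # get stuck in a local minimum; rerun with another seed if so).
    print(out.round(2).ravel())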

 

However, the paradigm of neural networks, in which implicit rather than explicit learning is stressed, seems to correspond more to some kind of natural intelligence than to traditional symbol-based Artificial Intelligence, which instead stresses rule-based learning.

 

4.3 BACKGROUND:

An artificial neural network involves a network of simple processing elements (artificial neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters. Artificial neurons were first proposed in 1943 by Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, who first collaborated at the University of Chicago.
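
The McCulloch-Pitts unit is simple enough to state in a few lines: fixed weights, a hard threshold, binary output. The weight and threshold choices below are illustrative (they realize AND and OR gates), not notation from the 1943 paper.

    def mcculloch_pitts(inputs, weights, threshold):
        """Binary threshold unit: fire iff the weighted sum reaches threshold."""
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # Logic gates as threshold units (illustrative weight/threshold choices):
    for a in (0, 1):
        for b in (0, 1):
            and_out = mcculloch_pitts((a, b), (1, 1), threshold=2)
            or_out = mcculloch_pitts((a, b), (1, 1), threshold=1)
            print(f"{a} {b} | AND={and_out} OR={or_out}")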

 

One classical type of artificial neural network is the recurrent Hopfield net.
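
A minimal sketch of such a Hopfield net, with Hebbian storage of one pattern and asynchronous updates that pull a corrupted input back to the stored memory (the pattern and sizes are arbitrary toy choices):

    import numpy as np

    rng = np.random.default_rng(1)
    pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])   # stored memory (+1/-1)

    # Hebbian storage: W = p p^T with a zeroed diagonal.
    W = np.outer(pattern, pattern).astype(float)
    np.fill_diagonal(W, 0.0)

    # Start from a corrupted version of the pattern (two bits flipped).
    state = pattern.copy()
    state[[0, 3]] *= -1

    for _ in range(5):                      # a few asynchronous sweeps
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1

    print("recovered:", bool(np.array_equal(state, pattern)))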

 

In a neural network model simple nodes (which can be called by a number of names, including "neurons", "neurodes", "Processing Elements" (PE) and "units"), are connected together to form a network of nodes — hence the term "neural network". While a neural network does not have to be adaptive per se, its practical use comes with algorithms designed to alter the strength (weights) of the connections in the network to produce a desired signal flow.
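
Those weight-altering algorithms can be as simple as the classical perceptron rule, which nudges each weight in proportion to the prediction error. A minimal sketch, learning the linearly separable AND function (learning rate and epoch count are arbitrary):

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])                # AND is linearly separable

    w = np.zeros(2)
    b = 0.0
    lr = 0.1

    for _ in range(20):                       # a few epochs suffice here
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            # Perceptron rule: adjust weights in proportion to the error.
            w += lr * (target - pred) * xi
            b += lr * (target - pred)

    print([(1 if xi @ w + b > 0 else 0) for xi in X])   # -> [0, 0, 0, 1]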

 

In modern software implementations of artificial neural networks the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing. In some of these systems, neural networks, or parts of neural networks (such as artificial neurons), are used as components in larger systems that combine both adaptive and non-adaptive elements.

 

The concept of a neural network appears to have first been proposed by Alan Turing in his 1948 paper "Intelligent Machinery".

 

4.4 APPLICATIONS OF NATURAL AND ARTIFICIAL NEURAL NETWORKS:

The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it. Unsupervised neural networks can also be used to learn representations of the input that capture the salient characteristics of the input distribution, e.g., see the Boltzmann machine (1983), and more recently, deep learning algorithms, which can implicitly learn the distribution function of the observed data. Learning in neural networks is particularly useful in applications where the complexity of the data or task makes the design of such functions by hand impractical.

The tasks to which artificial neural networks are applied tend to fall within the following broad categories:

       Function approximation, or regression analysis, including time series prediction and modeling.

       Classification, including pattern and sequence recognition, novelty detection and sequential decision making.

       Data processing, including filtering, clustering, blind signal separation and compression.

 

Application areas of ANNs include

 

system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, data mining (or knowledge discovery in databases, "KDD"), visualization and e-mail spam filtering.

 

4.5 SCIENTISTS CREATE TINY ARTIFICIAL BRAIN THAT EXHIBITS 12 SECONDS OF SHORT-TERM MEMORY:

 

Fig 5 Short-term memory (image: Ashwin Vishwanathan, Guo-Qiang Bi and Henry C. Zeringue, University of Pittsburgh)

 

It is not artificial intelligence in the Turing-test sense, but the technicolor ring you see above is actually an artificial microbrain, derived from rat brain cells (just 40 to 60 neurons in total), that is capable of about 12 seconds of short-term memory.

 

Developed by a team at the University of Pittsburgh, the brain was created in an attempt to artificially nurture a working brain into existence so that researchers could study neural networks and how our brains transmit electrical signals and store data so efficiently. They did so by attaching a layer of proteins to a silicon disk and adding brain cells from embryonic rats that attached themselves to the proteins and grew to connect with one another in the ring seen above.

 

But as if growing a tiny, functioning, donut-shaped brain in a Petri dish were not enough, the team found that when they stimulated the neurons with electricity, the pulse would circulate around the microbrain for a full 12 seconds. That is roughly 12 seconds longer than they expected it to last (they thought the pulse would die out after about a quarter of a second).

 

That is essentially short-term memory. The neurons were relaying the signal in sequence, persistently, mimicking the activity we know as working memory (though admittedly we do not understand working memory all that well). The brain is basically storing the stimulus long after the stimulus itself is gone, which is a big deal for a tiny brain grown in a dish.
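
The circulating pulse is easy to caricature: in a ring where each unit excites its clockwise neighbour, a single stimulation keeps travelling with no further external input. The toy model below (the sizes are our own arbitrary choices) only illustrates how recurrent connectivity can hold a stimulus as persistent activity; it does not model the Pittsburgh cultures.

    import numpy as np

    n = 40                                   # neurons arranged in a ring
    state = np.zeros(n, dtype=int)
    state[0] = 1                             # one external stimulus, then none

    W = np.zeros((n, n), dtype=int)
    for i in range(n):
        W[(i + 1) % n, i] = 1                # each neuron excites its neighbour

    for step in range(1, 201):
        state = (W @ state > 0).astype(int)  # purely recurrent drive persists
        if step % 50 == 0:
            print(f"step {step}: pulse at neuron {int(np.argmax(state))}")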

 

4.6 A SYNTHETIC BRAIN SYNAPSE IS CONSTRUCTED FROM CARBON NANOTUBES

 

 

Fig 6 Synthetic Synapse USC Viterbi School of Engineering

 

Building a synthetic brain is no easy undertaking, but researchers working on the problem have to start somewhere. In doing so, engineers at the University of Southern California have taken a huge step by building a synthetic synapse from carbon nanotubes.

 

In tests, their synapse circuit functions very much like a real neuron (neurons being the very building blocks of the brain). Tapping the unique properties of carbon nanotubes, their lab was able to recreate brain function, albeit in a very fractional way.

 

Of course, duplicating synapse firings in a nanotube circuit and creating synthetic brain function are two very different things. The human brain, as we well know, is very complex and hardly static like the inner workings of a computer. Over time it makes new connections, adapts to changes, and produces new neurons.

 

But while a functioning synthetic brain may be decades away, the synthetic synapse is here now, which could help researchers model neuron communications and otherwise begin building, from the ground up, an artificial mimic of one of biology’s biggest mysteries.

 

 

Fig 7 Brain as circuit

 

The axon hillock is located at the end of the soma and controls the firing of the neuron. If the total strength of the incoming signals exceeds the threshold limit of the axon hillock, the structure fires a signal (known as an action potential) down the axon. Once released at a synapse, neurotransmitters take about 0.5 millisecond (ms) to bind to receptors on postsynaptic cells.

 

 Neurons are organized into circuits. In a reflex arc, such as the knee-jerk reflex, interneurons connect multiple sensory and motor neurons, allowing one sensory neuron to affect multiple motor neurons. One muscle can be stimulated to contract while another is inhibited from contracting.

 

In neuroscience, a biological neural network describes a population of physically interconnected neurons, or a group of disparate neurons whose inputs or signalling targets define a recognizable circuit. Communication between neurons often involves an electrochemical process. The interface through which they interact with surrounding neurons usually consists of several dendrites (input connections), which are connected via synapses to other neurons, and one axon (output connection). If the sum of the input signals surpasses a certain threshold, the neuron generates an action potential (AP) at the axon hillock and transmits this electrical signal along the axon.

 


 

 

 

 
