060116: Multilayer In-place Learning Networks
Case ID:
TEC2006-0116
Web Published:
7/21/2014
Description:
Creating a neural network that can be fully and automatically trained for classification or regression analysis has been a longstanding challenge. Well-known methods such as feed-forward networks with back-propagation learning, radial-basis functions, support vector machines, the cascade-correlation learning architecture, and independent component analysis do not consider optimal statistical efficiency, and therefore suffer from a variety of problems such as local minima and unnecessarily large memory requirements.
Description
This technology is a network design for multilayer neural-network representations suited for classification and regression analysis. It introduces a new recurrent network architecture that includes bottom-up, lateral, top-down, and out-of-network projections; a new near-optimal in-place learning method; and the integration of unsupervised and supervised learning through every layer of the network.
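The projection types named above can be illustrated with a minimal sketch of one layer's response computation. All function and variable names here are illustrative assumptions, not taken from the patent: `W`, `M`, and `V` stand for bottom-up, top-down, and lateral weight matrices, and lateral input is treated as simple subtractive inhibition.

```python
import numpy as np

def layer_response(x_bottom_up, z_top_down, y_prev, W, M, V, inhibition=0.3):
    """Hypothetical single-layer update combining the three projection
    types: bottom-up input x, top-down input z, and lateral input from
    the layer's own previous response y_prev. Purely a sketch."""
    pre = W @ x_bottom_up + M @ z_top_down - inhibition * (V @ y_prev)
    return np.maximum(pre, 0.0)  # simple rectification of the net input
```

In this sketch the lateral term acts as competition within the layer, while the top-down term lets supervision or context from higher layers shape lower-layer responses, consistent with the recurrent architecture described above.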
In-place learning is a biologically inspired concept in which each neuron is fully responsible for its own learning in its environment, so no external learning network is needed. This results in a simple overall network architecture. Computationally, in-place learning provides unusually efficient learning algorithms whose simplicity, low computational complexity, and generality set them apart from typical conventional learning algorithms.
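The idea that each neuron learns from only its own input, response, and firing age can be sketched as a Hebbian-style incremental update. The averaging schedule below is an illustrative choice, not the patent's exact formulation:

```python
import numpy as np

def in_place_update(w, x, y, n):
    """Hebbian-style in-place update: the neuron adapts its own weight
    vector w using only its input x, its response y, and its firing
    age n -- no external error-propagation network is required.
    The 1/n learning-rate schedule makes w a running average of y*x."""
    retain = (n - 1) / n   # how much of the old weight to keep
    learn = 1 / n          # how much of the new evidence to add
    return retain * w + learn * y * x
```

Because the update is an incremental average, repeated presentations of a stable stimulus drive the weight toward that stimulus, while the neuron itself holds all the state its learning needs.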
This technology provides fully automatic internal self-organization that enables the network to learn skills or tasks autonomously through interactions with its environment. After being trained on a set of samples, the network balances two conflicting criteria afforded by the training set: global within-class invariance and global between-class discrimination.
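The two competing criteria can be made concrete with a simple scatter computation: within-class scatter should be small (invariance to variation inside a class) while between-class scatter should be large (discrimination across classes). This measure is a standard illustration, not the patent's own formulation:

```python
import numpy as np

def scatter_ratio(samples, labels):
    """Ratio of between-class scatter to within-class scatter.
    A large ratio means samples of the same class cluster tightly
    (invariance) while class means are far apart (discrimination)."""
    samples = np.asarray(samples, dtype=float)
    labels = np.asarray(labels)
    mean_all = samples.mean(axis=0)
    within = 0.0
    between = 0.0
    for c in np.unique(labels):
        grp = samples[labels == c]
        mu = grp.mean(axis=0)
        within += ((grp - mu) ** 2).sum()                    # spread inside class c
        between += len(grp) * ((mu - mean_all) ** 2).sum()   # distance of class mean
    return between / max(within, 1e-12)
```

A network that balances the two criteria well would map its training samples to representations with a high scatter ratio.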
Benefits
- Broad applicability: Applicable to a wide range of applications.
- Optimal statistical efficiency: Unlike established methods that suffer from architecture-specific problems, this invention considers statistical efficiency and almost completely eliminates the local-minima problem common in high-dimensional networks for classification or regression analysis.
- Low computational complexity: Expected to lead to improved performance and reduced memory requirements.
- Ease of use: The network is easy to use, with few user-selected parameters.
Applications
This technology can serve as a core engine for a wide variety of applications, such as face, object, character, or biometric data recognition; image analysis; stock value prediction; financial data analysis, including automated trading systems; and intelligent robots. It can be implemented in software, hardware, or a combination thereof.
Development Status
The invention has been fully designed.
IP Protection Status
1 U.S. patent issued: 7,711,663
For Information, Contact:
Raymond Devito
Technology Manager
Michigan State University
517-355-2186
devitora@msu.edu