GLORIA

GEOMAR Library Ocean Research Information Access

  • 1
    Online Resource
    MIT Press ; 2003
    In: Neural Computation, MIT Press, Vol. 15, No. 3 (2003-03-01), p. 621-638
    Abstract: The richness and complexity of recurrent cortical circuits is an inexhaustible source of inspiration for thinking about high-level biological computation. In past theoretical studies, constraints on the synaptic connection patterns of threshold-linear networks were found that guaranteed bounded network dynamics, convergence to attractive fixed points, and multistability, all fundamental aspects of cortical information processing. However, these conditions were only sufficient, and it remained unclear which were the minimal (necessary) conditions for convergence and multistability. We show that symmetric threshold-linear networks converge to a set of attractive fixed points if and only if the network matrix is copositive. Furthermore, the set of attractive fixed points is nonconnected (the network is multiattractive) if and only if the network matrix is not positive semidefinite. There are permitted sets of neurons that can be coactive at a stable steady state and forbidden sets that cannot. Permitted sets are clustered in the sense that subsets of permitted sets are permitted and supersets of forbidden sets are forbidden. By viewing permitted sets as memories stored in the synaptic connections, we provide a formulation of long-term memory that is more general than the traditional perspective of fixed-point attractor networks. There is a close correspondence between threshold-linear networks and networks defined by the generalized Lotka-Volterra equations.
    Type of Medium: Online Resource
    ISSN: 0899-7667, 1530-888X
    Language: English
    Publisher: MIT Press
    Publication Date: 2003
    ZDB ID: 1025692-1, 1498403-9
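    The convergence and permitted-set statements in this abstract can be illustrated numerically. The sketch below is not the authors' code: the dynamics dx/dt = -x + [Wx + b]_+, all parameter values, and the use of positive semidefiniteness of I - W as a stronger stand-in for the copositivity condition are assumptions made for the demonstration.

```python
# Minimal sketch (not the authors' code): Euler simulation of a symmetric
# threshold-linear network, dx/dt = -x + [W x + b]_+, to observe convergence
# to a fixed point.  Matrix scale, input and step size are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random symmetric interaction matrix, scaled to keep the demo well behaved.
W = rng.normal(scale=0.15, size=(n, n))
W = (W + W.T) / 2
b = rng.uniform(0.0, 1.0, size=n)            # constant external input

x = np.zeros(n)
dt = 0.01
for _ in range(20000):                        # forward Euler integration
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))

print("steady state:", np.round(x, 4))
print("active (permitted) set:", np.flatnonzero(x > 1e-6))

# As a rough, stronger stand-in for the copositivity condition, check whether
# I - W is positive semidefinite (which implies copositivity).
eigs = np.linalg.eigvalsh(np.eye(n) - W)
print("I - W positive semidefinite:", bool(np.all(eigs >= -1e-10)))
```
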
  • 2
    Online Resource
    MIT Press ; 2002
    In: Neural Computation, MIT Press, Vol. 14, No. 11 (2002-11-01), p. 2627-2646
    Abstract: Winner-take-all networks have been proposed to underlie many of the brain's fundamental computational abilities. However, not much is known about how to extend the grouping of potential winners in these networks beyond single neurons or uniformly arranged groups of neurons. We show that competition between arbitrary groups of neurons can be realized by organizing lateral inhibition in linear threshold networks. Given a collection of potentially overlapping groups (with the exception of some degenerate cases), the lateral inhibition results in network dynamics such that any permitted set of neurons that can be coactivated by some input at a stable steady state is contained in one of the groups. The information about the input is preserved in this operation. The activity level of a neuron in a permitted set corresponds to its stimulus strength, amplified by some constant. Sets of neurons that are not part of a group cannot be coactivated by any input at a stable steady state. We analyze the storage capacity of such a network for random groups—the number of random groups the network can store as permitted sets without creating too many spurious ones. In this framework, we calculate the optimal sparsity of the groups (maximizing group entropy). We find that for dense inputs, the optimal sparsity is unphysiologically small. However, when the inputs and the groups are equally sparse, we derive a more plausible optimal sparsity. We believe our results are the first steps toward attractor theories in hybrid analog-digital networks.
    Type of Medium: Online Resource
    ISSN: 0899-7667, 1530-888X
    Language: English
    Publisher: MIT Press
    Publication Date: 2002
    ZDB ID: 1025692-1, 1498403-9
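    A toy simulation can illustrate the grouping-by-lateral-inhibition idea. The construction below (inhibition of fixed strength beta between every pair of neurons that do not share a group, zero interaction otherwise) and all parameter values are assumptions chosen for the demo, not the paper's construction.

```python
# Toy construction (not the paper's): lateral inhibition of strength beta
# between every pair of neurons that do not share a group, zero interaction
# otherwise, in a threshold-linear network dx/dt = -x + [b + W x]_+.
import numpy as np

rng = np.random.default_rng(1)
n = 6
groups = [{0, 1, 2}, {2, 3, 4}, {4, 5}]       # potentially overlapping groups
beta = 2.0                                     # lateral inhibition strength

def share_group(i, j):
    return any(i in g and j in g for g in groups)

W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j and not share_group(i, j):
            W[i, j] = -beta                    # inhibit across groups only

b = rng.uniform(0.2, 1.0, size=n)              # random positive inputs
x = rng.uniform(0.0, 0.1, size=n)
dt = 0.01
for _ in range(50000):
    x = x + dt * (-x + np.maximum(b + W @ x, 0.0))

active = set(np.flatnonzero(x > 1e-6).tolist())
print("active set at steady state:", active)
print("contained in a single group:", any(active <= g for g in groups))
```
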
  • 3
    Online Resource
    MIT Press ; 2003
    In: Neural Computation, MIT Press, Vol. 15, No. 2 (2003-02-01), p. 441-454
    Abstract: Backpropagation and contrastive Hebbian learning are two methods of training networks with hidden neurons. Backpropagation computes an error signal for the output neurons and spreads it over the hidden neurons. Contrastive Hebbian learning involves clamping the output neurons at desired values and letting the effect spread through feedback connections over the entire network. To investigate the relationship between these two forms of learning, we consider a special case in which they are identical: a multilayer perceptron with linear output units, to which weak feedback connections have been added. In this case, the change in network state caused by clamping the output neurons turns out to be the same as the error signal spread by backpropagation, except for a scalar prefactor. This suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.
    Type of Medium: Online Resource
    ISSN: 0899-7667, 1530-888X
    Language: English
    Publisher: MIT Press
    Publication Date: 2003
    ZDB ID: 1025692-1, 1498403-9
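    The equivalence described here can be checked numerically in a small network. The sketch below assumes a one-hidden-layer perceptron with logistic hidden units, linear outputs, and feedback weights gamma * W2^T; the layer sizes and gamma are arbitrary demo choices, and the comparison shows only the approximate proportionality for weak feedback.

```python
# Minimal sketch of the comparison in the abstract: with weak feedback of
# strength gamma, the (clamped - free) change in hidden activity is roughly
# proportional to the backpropagated hidden error signal.
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 4, 6, 3
gamma = 1e-3                                  # weak feedback strength

W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
x = rng.normal(size=n_in)
d = rng.normal(size=n_out)                    # target for the linear outputs

sigma = lambda a: 1.0 / (1.0 + np.exp(-a))

def hidden_fixed_point(y_feedback):
    """Relax the hidden layer given feedback gamma * W2^T applied to y."""
    h = sigma(W1 @ x)
    for _ in range(200):
        h = sigma(W1 @ x + gamma * W2.T @ y_feedback(h))
    return h

# Free phase: outputs follow the network; clamped phase: outputs fixed at d.
h_free = hidden_fixed_point(lambda h: W2 @ h)
h_clamped = hidden_fixed_point(lambda h: d)
y_free = W2 @ h_free

# Backprop hidden error for the same input (squared error, linear outputs).
a = W1 @ x
delta_bp = sigma(a) * (1 - sigma(a)) * (W2.T @ (d - y_free))

print("contrastive / gamma :", np.round((h_clamped - h_free) / gamma, 4))
print("backprop delta      :", np.round(delta_bp, 4))
```
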
  • 4
    Online Resource
    MIT Press ; 2022
    In: Neural Computation, MIT Press, Vol. 34, No. 7 (2022-06-16), p. 1616-1635
    Abstract: Sparse coding has been proposed as a theory of visual cortex and as an unsupervised algorithm for learning representations. We show empirically with the MNIST data set that sparse codes can be very sensitive to image distortions, a behavior that may hinder invariant object recognition. A locally linear analysis suggests that the sensitivity is due to the existence of linear combinations of active dictionary elements with high cancellation. A nearest-neighbor classifier is shown to perform worse on sparse codes than original images. For a linear classifier with a sufficiently large number of labeled examples, sparse codes are shown to yield higher accuracy than original images, but no higher than a representation computed by a random feedforward net. Sensitivity to distortions seems to be a basic property of sparse codes, and one should be aware of this property when applying sparse codes to invariant object recognition.
    Type of Medium: Online Resource
    ISSN: 0899-7667, 1530-888X
    Language: English
    Publisher: MIT Press
    Publication Date: 2022
    ZDB ID: 1025692-1, 1498403-9
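    The sensitivity question can be probed with a generic sparse coder. The sketch below uses ISTA on a random unit-norm dictionary rather than the paper's MNIST experiment; the dictionary, sparsity penalty, perturbation size and iteration count are arbitrary assumptions, and the printout simply compares how much the code moves relative to the input.

```python
# Illustrative sketch (not the paper's experiment): sparse-code a vector and a
# slightly perturbed copy with ISTA, then compare the relative change in the
# code to the relative change in the input.
import numpy as np

rng = np.random.default_rng(3)
d, k = 64, 256                                # signal dim, dictionary size
D = rng.normal(size=(d, k))
D /= np.linalg.norm(D, axis=0)                # unit-norm dictionary atoms

def ista(x, lam=0.1, n_iter=500):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by ISTA."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    a = np.zeros(k)
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

x = rng.normal(size=d)
x_pert = x + 0.01 * rng.normal(size=d)        # small distortion of the input

a, a_pert = ista(x), ista(x_pert)
print("relative input change:", np.linalg.norm(x_pert - x) / np.linalg.norm(x))
print("relative code change :", np.linalg.norm(a_pert - a) / np.linalg.norm(a))
```
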
  • 5
    Online Resource
    MIT Press ; 2005
    In: Neural Computation, MIT Press, Vol. 17, No. 12 (2005-12-01), p. 2699-2718
    Abstract: Gradient-following learning methods can encounter problems of implementation in many applications, and stochastic variants are sometimes used to overcome these difficulties. We analyze three online training methods used with a linear perceptron: direct gradient descent, node perturbation, and weight perturbation. Learning speed is defined as the rate of exponential decay in the learning curves. When the scalar parameter that controls the size of weight updates is chosen to maximize learning speed, node perturbation is slower than direct gradient descent by a factor equal to the number of output units; weight perturbation is slower still by an additional factor equal to the number of input units. Parallel perturbation allows faster learning than sequential perturbation, by a factor that does not depend on network size. We also characterize how uncertainty in quantities used in the stochastic updates affects the learning curves. This study suggests that in practice, weight perturbation may be slow for large networks, and node perturbation can have performance comparable to that of direct gradient descent when there are few output units. However, these statements depend on the specifics of the learning problem, such as the input distribution and the target function, and are not universally applicable.
    Type of Medium: Online Resource
    ISSN: 0899-7667, 1530-888X
    Language: English
    Publisher: MIT Press
    Publication Date: 2005
    ZDB ID: 1025692-1, 1498403-9
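    The speed comparison can be reproduced qualitatively on a toy teacher-student problem. The sketch below implements online gradient descent and weight perturbation for a linear perceptron; the teacher, learning rates and perturbation size are demo choices, not the optimal values analysed in the paper.

```python
# Toy teacher-student comparison (demo values, not the paper's analysis):
# online gradient descent versus weight perturbation for a linear perceptron.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_out = 20, 5
W_teacher = rng.normal(size=(n_out, n_in))

def loss(W, x, y):
    return 0.5 * np.sum((W @ x - y) ** 2)

W_gd = np.zeros((n_out, n_in))
W_wp = np.zeros((n_out, n_in))
eta_gd = 0.02
eta_wp = eta_gd / (n_in * n_out)   # weight perturbation needs a much smaller
                                   # step to stay stable (cf. the abstract)
sigma = 1e-3                       # size of the weight perturbations

for t in range(2000):
    x = rng.normal(size=n_in)
    y = W_teacher @ x

    # Direct (online) gradient descent.
    W_gd -= eta_gd * np.outer(W_gd @ x - y, x)

    # Weight perturbation: perturb every weight at once and use the measured
    # change in loss as a stochastic estimate of the gradient direction.
    dW = sigma * rng.normal(size=W_wp.shape)
    dL = loss(W_wp + dW, x, y) - loss(W_wp, x, y)
    W_wp -= eta_wp * (dL / sigma**2) * dW

X_test = rng.normal(size=(200, n_in))
Y_test = X_test @ W_teacher.T
mse = lambda W: np.mean((X_test @ W.T - Y_test) ** 2)
print("test MSE, gradient descent   :", mse(W_gd))
print("test MSE, weight perturbation:", mse(W_wp))
```
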
  • 6
    In: Neural Computation, MIT Press, Vol. 22, No. 2 (2010-02), p. 511-538
    Abstract: Many image segmentation algorithms first generate an affinity graph and then partition it. We present a machine learning approach to computing an affinity graph using a convolutional network (CN) trained using ground truth provided by human experts. The CN affinity graph can be paired with any standard partitioning algorithm and improves segmentation accuracy significantly compared to standard hand-designed affinity functions. We apply our algorithm to the challenging 3D segmentation problem of reconstructing neuronal processes from volumetric electron microscopy (EM) and show that we are able to learn a good affinity graph directly from the raw EM images. Further, we show that our affinity graph improves the segmentation accuracy of both simple and sophisticated graph partitioning algorithms. In contrast to previous work, we do not rely on prior knowledge in the form of hand-designed image features or image preprocessing. Thus, we expect our algorithm to generalize effectively to arbitrary image types.
    Type of Medium: Online Resource
    ISSN: 0899-7667, 1530-888X
    Language: English
    Publisher: MIT Press
    Publication Date: 2010
    ZDB ID: 1025692-1, 1498403-9
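    The second stage of the pipeline (partitioning a nearest-neighbour affinity graph) can be sketched without a trained network. In the demo below, hand-made affinity arrays stand in for the convolutional network's output, and thresholded connected components play the role of the graph partitioner; the array values and threshold are arbitrary.

```python
# Sketch of the partitioning stage only: threshold a nearest-neighbour
# affinity graph and take connected components with union-find.  The affinity
# arrays here are hand-made stand-ins for the convolutional network's output.
import numpy as np

H, W = 4, 6
# aff_x[i, j]: affinity between pixels (i, j) and (i, j+1)
# aff_y[i, j]: affinity between pixels (i, j) and (i+1, j)
aff_x = np.ones((H, W - 1))
aff_y = np.ones((H - 1, W))
aff_x[:, 2] = 0.05                 # weak vertical boundary splits the image
aff_y[1, 3:] = 0.05                # weak horizontal boundary on the right half

threshold = 0.5
parent = list(range(H * W))        # union-find over pixels

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def union(i, j):
    parent[find(i)] = find(j)

for i in range(H):
    for j in range(W - 1):
        if aff_x[i, j] > threshold:
            union(i * W + j, i * W + j + 1)
for i in range(H - 1):
    for j in range(W):
        if aff_y[i, j] > threshold:
            union(i * W + j, (i + 1) * W + j)

roots = np.array([find(p) for p in range(H * W)])
_, labels = np.unique(roots, return_inverse=True)   # consecutive segment ids
print(labels.reshape(H, W))
```
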
  • 7
    Online Resource
    MIT Press ; 2009
    In: Neural Computation, MIT Press, Vol. 21, No. 10 (2009-10), p. 2755-2773
    Abstract: Over the past several decades, economists, psychologists, and neuroscientists have conducted experiments in which a subject, human or animal, repeatedly chooses between alternative actions and is rewarded based on choice history. While individual choices are unpredictable, aggregate behavior typically follows Herrnstein's matching law: the average reward per choice is equal for all chosen alternatives. In general, matching behavior does not maximize the overall reward delivered to the subject, and therefore matching appears inconsistent with the principle of utility maximization. Here we show that matching can be made consistent with maximization by regarding the choices of a single subject as being made by a sequence of multiple selves—one for each instant of time. If each self is blind to the state of the world and discounts future rewards completely, then the resulting game has at least one Nash equilibrium that satisfies both Herrnstein's matching law and the unpredictability of individual choices. This equilibrium is, in general, Pareto suboptimal, and can be understood as a mutual defection of the multiple selves in an intertemporal prisoner's dilemma. The mathematical assumptions about the multiple selves should not be interpreted literally as psychological assumptions. Humans and animals do remember past choices and care about future rewards. However, they may be unable to comprehend or take into account the relationship between past and future. This can be made more explicit when a mechanism that converges on the equilibrium, such as reinforcement learning, is considered. Using specific examples, we show that there exist behaviors that satisfy the matching law but are not Nash equilibria. We expect that these behaviors will not be observed experimentally in animals and humans. If this is the case, the Nash equilibrium formulation can be regarded as a refinement of Herrnstein's matching law.
    Type of Medium: Online Resource
    ISSN: 0899-7667, 1530-888X
    Language: English
    Publisher: MIT Press
    Publication Date: 2009
    ZDB ID: 1025692-1, 1498403-9
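    The matching-versus-maximizing distinction can be illustrated with a toy melioration-style learner. The reward schedules r_A(p) = 1 - p and r_B(p) = 0.5 and the learning rule are assumptions chosen for the demo, not the paper's model; for these schedules, matching predicts an allocation p(A) = 0.5, while total reward would be maximized at p(A) = 0.25.

```python
# Toy illustration (not the paper's model): a melioration-style learner on two
# alternatives whose per-choice reward probabilities depend on the learner's
# own allocation p = P(choose A).  It drifts toward the matching allocation,
# where reward per choice is equal on both sides, not the maximizing one.
import numpy as np

rng = np.random.default_rng(7)
r_A = lambda p: 1.0 - p        # per-choice reward probability on A
r_B = lambda p: 0.5            # per-choice reward probability on B

p = 0.9                        # initial allocation to A
est = {"A": 0.5, "B": 0.5}     # running estimates of reward per choice
alpha, lr = 0.01, 0.002        # estimate smoothing and allocation step size

for t in range(100_000):
    choice = "A" if rng.random() < p else "B"
    prob = r_A(p) if choice == "A" else r_B(p)
    reward = float(rng.random() < prob)
    est[choice] += alpha * (reward - est[choice])
    # Melioration: shift allocation toward the locally richer alternative.
    p = np.clip(p + lr * (est["A"] - est["B"]), 0.01, 0.99)

print("final allocation p(A):", round(float(p), 3))   # approx 0.5 (matching)
print("reward/choice A vs B :", round(est["A"], 3), round(est["B"], 3))
print("maximizing allocation would be p(A) = 0.25")
```
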