(Image caption: The prefrontal cortex connects to a very specific region of the brainstem (the PAG) through prefrontal cortical neurons: those labeled in purple directly project to the PAG and control our instinctive behaviours. Credit: EMBL/Livia Marrone)

Neural connection keeps instincts in check

From fighting the urge to hit someone to resisting the temptation to run off stage instead of giving that public speech, we are often confronted with situations where we have to curb our instincts. Scientists at EMBL have traced exactly which neuronal projections prevent social animals like us from acting out such impulses. The study, published online in Nature Neuroscience, could have implications for schizophrenia and mood disorders like depression.

“Instincts like fear and sex are important, but you don’t want to be acting on them all the time,” says Cornelius Gross, who led the work at EMBL. “We need to be able to dynamically control our instinctive behaviours, depending on the situation.”

The driver of our instincts is the brainstem – the region at the very base of your brain, just above the spinal cord. Scientists have known for some time that another brain region, the prefrontal cortex, plays a role in keeping those instincts in check (see background information below). But exactly how the prefrontal cortex puts a brake on the brainstem has remained unclear.

Now, Gross and colleagues have literally found the connection between prefrontal cortex and brainstem. The EMBL scientists teamed up with Tiago Branco’s lab at MRC LMB, and traced connections between neurons in a mouse brain. They discovered that the prefrontal cortex makes prominent connections directly to the brainstem.

Gross and colleagues went on to confirm that this physical connection was the brake that inhibits instinctive behaviour. They found that in mice that have been repeatedly defeated by another mouse – the murine equivalent of being bullied – this connection weakens, and the mice act more scared. The scientists found that they could elicit those same fearful behaviours in mice that had never been bullied, simply by using drugs to block the connection between prefrontal cortex and brainstem.

These findings provide an anatomical explanation for why it’s much easier to stop yourself from hitting someone than it is to stop yourself from feeling aggressive. The scientists found that the connection from the prefrontal cortex is to a very specific region of the brainstem, called the PAG, which is responsible for the acting out of our instincts. However, it doesn’t affect the hypothalamus, the region that controls feelings and emotions. So the prefrontal cortex keeps behaviour in check, but doesn’t affect the underlying instinctive feeling: it stops you from running off-stage, but doesn’t abate the butterflies in your stomach.

The work has implications for schizophrenia and mood disorders such as depression, which have been linked to problems with prefrontal cortex function and maturation.

“One fascinating implication we’re looking at now is that we know the prefrontal cortex matures during adolescence. Kids are really bad at inhibiting their instincts; they don’t have this control,” says Gross, “so we’re trying to figure out how this inhibition comes about, especially as many mental illnesses like mood disorders are typically adult-onset.”

More Posts from R3ds3rpent and Others

9 years ago

Leonardo DiCaprio accepts his award for Best Actor in a Drama Film for ‘The Revenant’ and dedicates it to indigenous communities.

9 years ago

soo cool

9 years ago

Game of Thrones Filming Locations

9 years ago

How I feel when I write anything in C++

8 years ago

Model sheds light on purpose of inhibitory neurons

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have developed a new computational model of a neural circuit in the brain, which could shed light on the biological role of inhibitory neurons — neurons that keep other neurons from firing.

The model describes a neural circuit consisting of an array of input neurons and an equivalent number of output neurons. The circuit performs what neuroscientists call a “winner-take-all” operation, in which signals from multiple input neurons induce a signal in just one output neuron.

Using the tools of theoretical computer science, the researchers prove that, within the context of their model, a certain configuration of inhibitory neurons provides the most efficient means of enacting a winner-take-all operation. Because the model makes empirical predictions about the behavior of inhibitory neurons in the brain, it offers a good example of the way in which computational analysis could aid neuroscience.

The researchers presented their results at the conference on Innovations in Theoretical Computer Science. Nancy Lynch, the NEC Professor of Software Science and Engineering at MIT, is the senior author on the paper. She’s joined by Merav Parter, a postdoc in her group, and Cameron Musco, an MIT graduate student in electrical engineering and computer science.

For years, Lynch’s group has studied communication and resource allocation in ad hoc networks — networks whose members are continually leaving and rejoining. But recently, the team has begun using the tools of network analysis to investigate biological phenomena.

“There’s a close correspondence between the behavior of networks of computers or other devices like mobile phones and that of biological systems,” Lynch says. “We’re trying to find problems that can benefit from this distributed-computing perspective, focusing on algorithms for which we can prove mathematical properties.”

Artificial neurology

In recent years, artificial neural networks — computer models roughly based on the structure of the brain — have been responsible for some of the most rapid improvement in artificial-intelligence systems, from speech transcription to face recognition software.

An artificial neural network consists of “nodes” that, like individual neurons, have limited information-processing power but are densely interconnected. Data are fed into the first layer of nodes. If the data received by a given node meet some threshold criterion — for instance, if they exceed a particular value — the node “fires,” or sends signals along all of its outgoing connections.

Each of those outgoing connections, however, has an associated “weight,” which can augment or diminish a signal. Each node in the next layer of the network receives weighted signals from multiple nodes in the first layer; it adds them together, and again, if their sum exceeds some threshold, it fires. Its outgoing signals pass to the next layer, and so on.
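
As a rough illustration of the mechanics just described (not code from the paper), a single node can be written in a few lines of Python; the signal values, weights, and threshold here are arbitrary:

```python
import numpy as np

def node_fires(inputs, weights, threshold):
    """Fire (1) if the weighted sum of incoming signals exceeds
    the node's threshold; otherwise stay silent (0)."""
    return 1 if np.dot(inputs, weights) > threshold else 0

# Three incoming signals with mixed positive and negative weights.
signals = np.array([1.0, 0.5, 1.0])
weights = np.array([0.8, -0.4, 0.6])
print(node_fires(signals, weights, threshold=0.5))  # 1.2 > 0.5, so it fires
```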

In artificial-intelligence applications, a neural network is “trained” on sample data, constantly adjusting its weights and firing thresholds until the output of its final layer consistently represents the solution to some computational problem.
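
As a minimal sketch of such a training loop, here is the classic perceptron rule on a toy task; it is far simpler than the methods used in real applications, but it shows weights being adjusted until the output is consistently correct. The data and learning rate are made up for illustration:

```python
import numpy as np

# Toy task: learn the logical OR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])

weights, bias, lr = np.zeros(2), 0.0, 0.1  # learning rate chosen arbitrarily

for _ in range(20):  # a few passes over the data are enough here
    for inputs, target in zip(X, y):
        fired = 1 if np.dot(weights, inputs) + bias > 0 else 0
        error = target - fired
        weights += lr * error * inputs  # nudge weights toward the target
        bias += lr * error              # nudge the firing threshold too

print(weights, bias)  # e.g. [0.1 0.1] 0.0; all four cases now classified
```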

Biological plausibility

Lynch, Parter, and Musco made several modifications to this design to make it more biologically plausible. The first was the addition of inhibitory “neurons.” In a standard artificial neural network, the values of the weights on the connections are usually positive or capable of being either positive or negative. But in the brain, some neurons appear to play a purely inhibitory role, preventing other neurons from firing. The MIT researchers modeled those neurons as nodes whose connections have only negative weights.

Many artificial-intelligence applications also use “feed-forward” networks, in which signals pass through the network in only one direction, from the first layer, which receives input data, to the last layer, which provides the result of a computation. But connections in the brain are much more complex. Lynch, Parter, and Musco’s circuit thus includes feedback: Signals from the output neurons pass to the inhibitory neurons, whose output in turn passes back to the output neurons. The signaling of the output neurons also feeds back on itself, which proves essential to enacting the winner-take-all strategy.

Finally, the MIT researchers’ network is probabilistic. In a typical artificial neural net, if a node’s input values exceed some threshold, the node fires. But in the brain, increasing the strength of the signal traveling over an input neuron only increases the chances that an output neuron will fire. The same is true of the nodes in the researchers’ model. Again, this modification is crucial to enacting the winner-take-all strategy.
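
A probabilistic node can be sketched by mapping the weighted input through a sigmoid to get a firing probability (a common modeling choice, though not necessarily the paper's exact rule):

```python
import numpy as np

rng = np.random.default_rng(0)

def fires_probabilistically(weighted_input):
    """Stronger input raises the chance of firing but never guarantees it."""
    p = 1.0 / (1.0 + np.exp(-weighted_input))  # sigmoid maps input to (0, 1)
    return rng.random() < p

# Out of 1000 trials, a strong input fires ~880 times, a weak one ~120.
print(sum(fires_probabilistically(2.0) for _ in range(1000)))
print(sum(fires_probabilistically(-2.0) for _ in range(1000)))
```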

In the researchers’ model, the number of input and output neurons is fixed, and the execution of the winner-take-all computation is purely the work of a bank of auxiliary neurons. “We are trying to see the trade-off between the computational time to solve a given problem and the number of auxiliary neurons,” Parter explains. “We consider neurons to be a resource; we don’t want to spend too much of it.”

Inhibition’s virtues

Parter and her colleagues were able to show that with only one inhibitory neuron, it’s impossible, in the context of their model, to enact the winner-take-all strategy. But two inhibitory neurons are sufficient. The trick is that one of the inhibitory neurons — which the researchers call a convergence neuron — sends a strong inhibitory signal if more than one output neuron is firing. The other inhibitory neuron — the stability neuron — sends a much weaker signal as long as any output neurons are firing.

The convergence neuron drives the circuit to select a single output neuron, at which point the convergence neuron stops firing; the stability neuron prevents a second output neuron from becoming active once the convergence neuron has been turned off. The self-feedback circuits from the output neurons enhance this effect. The longer an output neuron has been turned off, the more likely it is to remain off; the longer it’s been on, the more likely it is to remain on. Once a single output neuron has been selected, its self-feedback circuit ensures that it can overcome the inhibition of the stability neuron.

Without randomness, however, the circuit won’t converge to a single output neuron: Any setting of the inhibitory neurons’ weights will affect all the output neurons equally. “You need randomness to break the symmetry,” Parter explains.
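
Putting these pieces together, the following is a small, self-contained simulation in the spirit of the model: several output neurons with self-feedback, a "convergence" inhibitor that fires strongly whenever more than one output is active, and a "stability" inhibitor that fires weakly whenever any output is active. All weights and the constant input drive are illustrative guesses, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n = 5                          # number of output neurons
state = np.ones(n, dtype=int)  # start with every output neuron firing
SELF_EXCITE = 2.0              # self-feedback weight (illustrative)
CONVERGENCE_WEIGHT = -3.0      # strong inhibition when >1 output fires
STABILITY_WEIGHT = -1.0        # weak inhibition while any output fires
INPUT_DRIVE = 0.5              # constant drive from the input layer

for step in range(50):
    active = state.sum()
    # The two inhibitory neurons read the output layer (feedback edges).
    inhibition = (CONVERGENCE_WEIGHT * (active > 1)
                  + STABILITY_WEIGHT * (active >= 1))
    # Each output neuron fires probabilistically given its self-feedback
    # plus the shared inhibition; the randomness breaks the symmetry.
    drive = SELF_EXCITE * state + inhibition + INPUT_DRIVE
    state = (rng.random(n) < sigmoid(drive)).astype(int)
    if state.sum() == 1:
        break  # converged to a single winner

print(f"active outputs after {step + 1} steps: {np.flatnonzero(state)}")
```

Run repeatedly, the circuit almost always settles on exactly one (randomly chosen) output within a handful of steps; remove the convergence inhibitor and multiple outputs stay lit, remove the randomness and all outputs behave identically.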

The researchers were able to determine the minimum number of auxiliary neurons required to guarantee a particular convergence speed and the maximum convergence speed possible given a particular number of auxiliary neurons.

Adding more convergence neurons increases the convergence speed, but only up to a point. For instance, with 100 input neurons, two or three convergence neurons are all you need; adding a fourth doesn’t improve efficiency. And just one stability neuron is already optimal.

But perhaps more intriguingly, the researchers showed that including excitatory neurons — neurons that stimulate, rather than inhibit, other neurons’ firing — as well as inhibitory neurons among the auxiliary neurons cannot improve the efficiency of the circuit. Similarly, any arrangement of inhibitory neurons that doesn’t observe the distinction between convergence and stability neurons will be less efficient than one that does.

Assuming, then, that evolution tends to find efficient solutions to engineering problems, the model suggests both an answer to the question of why inhibitory neurons are found in the brain and a tantalizing question for empirical research: Do real inhibitory neurons exhibit the same division between convergence neurons and stability neurons?

“This computation of winner-take-all is quite a broad and useful motif that we see throughout the brain,” says Saket Navlakha, an assistant professor in the Integrative Biology Laboratory at the Salk Institute for Biological Studies. “In many sensory systems — for example, the olfactory system — it’s used to generate sparse codes.”

“There are many classes of inhibitory neurons that we’ve discovered, and a natural next step would be to see if some of these classes map on to the ones predicted in this study,” he adds.

“There’s a lot of work in neuroscience on computational models that take into account much more detail about not just inhibitory neurons but what proteins drive these neurons and so on,” says Ziv Bar-Joseph, a professor of computer science at Carnegie Mellon University. “Nancy is taking a global view of the network rather than looking at the specific details. In return she gets the ability to look at some larger-picture aspects. How many inhibitory neurons do you really need? Why do we have so few compared to the excitatory neurons? The unique aspect here is that this global-scale modeling gives you a much higher-level type of prediction.”

10 years ago

This tutorial shows simple step-by-step instructions to get the BMP180 pressure sensor wired up and programmed with the Arduino Microcontroller. We show how …


7 years ago

(Image caption: Diagram of the research findings, taken from the article’s table of contents image. bFGF is produced in the injured zone of the cerebral cortex. Ror2 expression is induced in a subpopulation of the astrocytes that receive the bFGF signal, restarting their proliferation by accelerating the progression of their cell cycle.)

How brain tissue recovers after injury: the role of astrocytes

A research team led by Associate Professor Mitsuharu ENDO and Professor Yasuhiro MINAMI (both from the Department of Physiology and Cell Biology, Graduate School of Medicine, Kobe University) has pinpointed the mechanism underlying astrocyte-mediated restoration of brain tissue after an injury. This could lead to new treatments that encourage regeneration by limiting damage to neurons incurred by reduced blood supply or trauma. The findings were published on October 11 in the online version of GLIA.

When the brain is damaged by trauma or ischemia (restriction in blood supply), immune cells such as macrophages and lymphocytes dispose of the damaged neurons with an inflammatory response. However, an excessive inflammatory response can also harm healthy neurons.

Astrocytes are a type of glial cell*, and the most numerous cell within the human cerebral cortex. In addition to their supportive role in providing nutrients to neurons, studies have shown that they have various other functions, including the direct or active regulation of neuronal activities.

It has recently become clear that astrocytes also have an important function in the restoration of injured brain tissue. While astrocytes do not normally proliferate in healthy brains, they start to proliferate and increase their numbers around injured areas, minimizing inflammation by surrounding the damaged neurons, other astrocytes, and inflammatory cells that have entered the damaged zone. Until now, however, the mechanism that prompts astrocytes to proliferate in response to injury had been unclear.

The research team focused on the fact that the astrocytes which proliferate around injured areas acquire characteristics similar to neural stem cells. The receptor tyrosine kinase Ror2, a cell surface protein, is highly expressed in neural stem cells in the developing brain. Normally the Ror2 gene is “switched off” within adult brains, but these findings showed that when the brain was injured, Ror2 was expressed in a certain population of the astrocytes around the injured area.

Ror2 is an important cell-surface protein that regulates the proliferation of neural stem cells, so the researchers proposed that Ror2 was regulating the proliferation of astrocytes around the injured areas. They tested this using model mice in which the Ror2 gene was not expressed in astrocytes. In these mice, the number of proliferating astrocytes after injury decreased remarkably, and the density of astrocytes around the injury site was reduced. Using cultured astrocytes, the team analyzed the mechanism for activating the Ror2 gene and ascertained that basic fibroblast growth factor (bFGF) can “switch on” Ror2 in some astrocytes.

This research showed that in injured brains, the astrocytes in which bFGF signaling induces high Ror2 expression are primarily responsible for restarting proliferation. bFGF is produced by different cell types, including neurons and astrocytes in the injury zone that have escaped damage. Among the astrocytes that receive these bFGF signals around the injury zone, some express Ror2 and some do not. The fact that fewer astrocytes proliferate after brain injury in aged brains raises the possibility that the population of astrocytes able to express Ror2 decreases with age, which could contribute to the rise in senile dementia. The researchers are now aiming to clarify the mechanism that creates these different astrocyte populations.

By artificially controlling the proliferation of astrocytes, it may become possible in the future to minimize the damage caused to neurons by brain injuries and to establish new treatments that encourage regeneration of damaged brain areas.

*Glial cell: a catch-all term for non-neuronal cells that belong to the nervous system. They support neurons in various roles.

9 years ago

Muscle-controlling Neurons Know When They Mess Up

Whether it is playing a piano sonata or acing a tennis serve, the brain needs to orchestrate precise, coordinated control over the body’s many muscles. Moreover, there needs to be some kind of feedback from the senses should any of those movements go wrong. Neurons that coordinate those movements, known as Purkinje cells, and ones that provide feedback when there is an error or unexpected sensation, known as climbing fibers, work in close concert to fine-tune motor control.   

A team of researchers from the University of Pennsylvania and Princeton University has now begun to unravel the decades-spanning paradox concerning how this feedback system works.

At the heart of this puzzle is the fact that while climbing fibers send signals to Purkinje cells when there is an error to report, they also fire spontaneously, about once a second. There did not seem to be any mechanism by which individual Purkinje cells could detect a legitimate error signal from within this deafening noise of random firing. 

Using a microscopy technique that allowed the researchers to directly visualize the chemical signaling occurring between the climbing fibers and Purkinje cells of live, active mice, the Penn team has for the first time shown that there is a measurable difference between “true” and “false” signals.

This knowledge will be fundamental to future studies of fine motor control, particularly with regards to how movements can be improved with practice. 

The research was conducted by Javier Medina, assistant professor in the Department of Psychology in Penn’s School of Arts and Sciences, and Farzaneh Najafi, a graduate student in the Department of Biology. They collaborated with postdoctoral fellow Andrea Giovannucci and associate professor Samuel S. H. Wang of Princeton University.

It was published in the journal Cell Reports.

The cerebellum is one of the brain’s motor control centers. It contains thousands of Purkinje cells, each of which collects information from elsewhere in the brain and funnels it down to the muscle-triggering motor neurons. Each Purkinje cell receives messages from a climbing fiber, a type of neuron that extends from the brain stem and sends feedback about the associated muscles. 

“Climbing fibers are not just sensory neurons, however,” Medina said. “What makes climbing fibers interesting is that they don’t just say, ‘Something touched my face’; they say, ‘Something touched my face when I wasn’t expecting it.’ This is something that our brains do all the time, which explains why you can’t tickle yourself. There’s part of your brain that’s already expecting the sensation that will come from moving your fingers. But if someone else does it, the brain can’t predict it in the same way, and it is that unexpectedness that leads to the tickling sensation.”

Not only does the climbing fiber feedback system for unexpected sensations serve as an alert to potential danger — unstable footing, an unseen predator brushing by — it helps the brain improve when an intended action doesn’t go as planned.    

“The sensation of muscles that don’t move in the way the Purkinje cells direct them to also counts as unexpected, which is why some people call climbing fibers ‘error cells,’” Medina said. “When you mess up your tennis swing, they’re saying to the Purkinje cells, ‘Stop! Change! What you’re doing is not right!’ That’s where they help you learn how to correct your movements.

“When the Purkinje cells get these signals from climbing fibers, they change by adding or tweaking the strength of the connections coming in from the rest of the brain to their dendrites. And because the Purkinje cells are so closely connected to the motor neurons, the changes to those synapses are going to result in changes to the movements that Purkinje cell controls.”

This is a phenomenon known as neuroplasticity, and it is fundamental for learning new behaviors or improving on them. That new neural pathways form in response to error signals from the climbing fibers allows the cerebellum to send better instructions to motor neurons the next time the same action is attempted.

The paradox that faced neuroscientists was that these climbing fibers, like many other neurons, are spontaneously activated. About once every second, they send a signal to their corresponding Purkinje cell, whether or not there were any unexpected stimuli or errors to report.

“So if you’re the Purkinje cell,” Medina said, “how are you ever going to tell the difference between signals that are spontaneous, meaning you don’t need to change anything, and ones that really need to be paid attention to?”

Medina and his colleagues devised an experiment to test whether there was a measurable difference between legitimate and spontaneous signals from the climbing fibers. In their study, the researchers had mice walk on treadmills while their heads were kept stationary. This allowed the researchers to blow random puffs of air at their faces, causing them to blink, and to use a non-invasive microscopy technique to look at how the relevant Purkinje cells respond.

The technique, two-photon microscopy, uses an infrared laser and a fluorescent dye to look deep into living tissue, providing information on both structure and chemical composition. Neural signals are transmitted within neurons by changing calcium concentrations, so the researchers used this technique to measure the amount of calcium contained within the Purkinje cells in real time.

Because the random puffs of air were unexpected stimuli for the mice, the researchers could directly compare the differences between legitimate and spontaneous signals in the eyelid-related Purkinje cells that made the mice blink.

“What we have found is that the Purkinje cell fills with more calcium when its corresponding climbing fiber sends a signal associated with that kind of sensory input, rather than a spontaneous one,” Medina said. “This was a bit of a surprise for us because climbing fibers had been thought of as ‘all or nothing’ for more than 50 years now.”

The mechanism that allows individual Purkinje cells to differentiate between the two kinds of climbing fiber signals is an open question. These signals come in bursts, so the number and spacing of the electrical impulses from climbing fiber to Purkinje cell might be significant. Medina and his colleagues also suspect that another mechanism is at play: Purkinje cells might respond differently when a signal from a climbing fiber is synchronized with signals coming from elsewhere in the brain.

Whether either or both of these explanations are confirmed, the fact that individual Purkinje cells are able to distinguish when their corresponding muscle neurons encounter an error must be taken into account in future studies of fine motor control. This understanding could lead to new research into the fundamentals of neuroplasticity and learning.    

“Something that would be very useful for the brain is to have information not just about whether there was an error but how big the error was — whether the Purkinje cell needs to make a minor or major adjustment,” Medina said. “That sort of information would seem to be necessary for us to get very good at any kind of activity that requires precise control. Perhaps climbing fiber signals are not as ‘all-or-nothing’ as we all thought and can provide that sort of graded information.”

10 years ago

pythonhub:

Most of my current workflow involves some manner of data analysis / visualization / relatively light stats in an IPython notebook. A new source of data (Factset, if it helps) has well-developed interfaces for R and Matlab – both of which I’ve used extensively in the past, but barely at all in the last ~year.

My question is which – R or Matlab – is going to lend itself to more flexibility in terms of using data pulled through one of them in Python (at least in cases where switching back over to Python makes sense in the first place)? Would you rather have to use a combination of Python and R, or a combination of Python and Matlab?

Thanks!

submitted by josiahstevenson

Clearly R. It's by far more accessible: open source means free. R libraries grow fast in most areas of research. It reminds me of what DEC did with the PDP-11. DEC gave them away free to many selected universities. Soon they became the standard. When grads got jobs, they eventually opted for what they knew well, and demanded it.
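
For the Python-side integration the question asks about, rpy2 is the usual bridge to R. A minimal sketch follows; the data frame here is made up for illustration, and a real workflow would call the vendor's R package instead:

```python
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter

# Build a stand-in data frame in the embedded R session.
ro.r('df <- data.frame(ticker = c("AAA", "BBB"), price = c(10.5, 20.25))')

# Convert the R data frame to a pandas DataFrame for use in IPython.
with localconverter(ro.default_converter + pandas2ri.converter):
    pdf = ro.conversion.rpy2py(ro.r['df'])

print(pdf)
```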

R API or Matlab API for integration with Python downstream? (x-post /r/pystats)


Kode, Transistors and Spirit

Machine Learning, Big Data, Code, R, Python, Arduino, Electronics, robotics, Zen, Native spirituality, and a few other matters.
