Sniffing Out Smell

    The premiere of the movie Scent of Mystery in 1960 marked a singular event in the annals of cinema: the first, and last, motion picture to debut “in glorious Smell-O-Vision.”

    Hoping to wow moviegoers with a dynamic olfactory experience alongside the familiar spectacles of sight and sound, the filmmakers outfitted select theaters with a Rube Goldberg-esque device that piped different scents directly to seats.

    Audiences and critics quickly concluded that the experience stunk. Fraught with technical issues, Smell-O-Vision was panned and became a running gag that holds a unique place in entertainment history.

    The flop of Smell-O-Vision, however, failed to deter entrepreneurs from continuing to chase the dream of delivering smells to consumers, particularly in recent years, through digital scent technologies.

    Such efforts have generated news headlines but scant success, due in part to a limited understanding of how the brain translates odor chemistry into perceptions of smell—a phenomenon that in many ways remains opaque to scientists.

    A study by neurobiologists at Harvard Medical School now provides new insights into the mystery of scent. Reporting in Nature, researchers describe for the first time how relationships between different odors are encoded in the olfactory cortex, the region of the brain responsible for processing smell.

    By delivering odors with carefully selected molecular structures and analyzing neural activity in awake mice, the team showed that neuronal representations of smell in the cortex reflect chemical similarities between odors, thus enabling scents to be placed into categories by the brain. Moreover, these representations can be rewired by sensory experiences.

    The findings suggest a neurobiological mechanism that may explain why individuals have common but highly personalized experiences with smell.

    “All of us share a common frame of reference with smells. You and I both think lemon and lime smell similar and agree that they smell different from pizza, but until now, we didn’t know how the brain organizes that kind of information,” said senior study author Sandeep Robert Datta, associate professor of neurobiology in the Blavatnik Institute at HMS.

    The results open new avenues of study to better understand how the brain transforms information about odor chemistry into the perception of smell.

    “This is the first demonstration of how the olfactory cortex encodes information about the very thing that it’s responsible for, which is odor chemistry, the fundamental sensory cues of olfaction,” Datta said.

    Computing odor

    The sense of smell allows animals to identify the chemical nature of the world around them. Sensory neurons in the nose detect odor molecules and relay signals to the olfactory bulb, a structure in the forebrain where initial odor processing occurs. The olfactory bulb primarily transmits information to the piriform cortex, the main structure of the olfactory cortex, for more comprehensive processing.

    Unlike light or sound, which are stimuli easily controlled by tweaking characteristics such as frequency and wavelength, the small molecules that transmit odor are difficult to manipulate for probing how the brain builds neural representations of smell. Often, subtle chemical changes—a few carbon atoms here or oxygen atoms there—can lead to significant differences in smell perception.

    Datta, along with study first author Stan Pashkovski, research fellow in neurobiology at HMS, and colleagues approached this challenge by focusing on the question of how the brain identifies related but distinct odors.

    “The fact that we all think a lemon and lime smell similar means that their chemical makeup must somehow evoke similar or related neural representations in our brains,” Datta said.

    To investigate, the researchers developed an approach to quantitatively compare odor chemicals analogous to how differences in wavelength, for example, can be used to quantitatively compare colors of light.

    They used machine learning to look at thousands of chemical structures known to have odors and analyzed thousands of different features for each structure, such as the number of atoms, molecular weight, electrochemical properties and more. Together, these data allowed the researchers to systematically compute how similar or different any odor was relative to another.
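    As a rough illustration of this kind of computation, the sketch below scores chemical similarity between small feature vectors with a cosine measure. It is a toy, not the authors' actual pipeline: the odor names are real molecules, but the three feature values per odor are made-up placeholders standing in for the thousands of descriptors the study used.

```python
import math

# Hypothetical chemical feature vectors: each odorant is described by a few
# descriptors (e.g. carbon count, molecular weight, an electrochemical score).
# The real study analyzed thousands of such features per structure.
odors = {
    "limonene": [10, 136.2, 0.31],   # lemon-like
    "pinene":   [10, 136.2, 0.28],   # pine-like, structurally very close
    "vanillin": [8, 152.1, 0.75],    # chemically more distant
}

def cosine_similarity(a, b):
    """Similarity of two feature vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pairwise similarity over the small odor library
for name_a, vec_a in odors.items():
    for name_b, vec_b in odors.items():
        print(f"{name_a} vs {name_b}: {cosine_similarity(vec_a, vec_b):.3f}")
```

With a metric like this in hand, a library of odorants can be systematically sorted into high-, intermediate- and low-diversity sets, as the researchers did.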

    From this library, the team designed three sets of odors: a set with high diversity; one with intermediate diversity, with odors divided into related clusters; and one of low diversity, where structures varied only by incremental increases in carbon-chain length.

    They then exposed mice to various combinations of odors from the different sets and used multiphoton microscopy to image patterns of neural activity in the piriform cortex and olfactory bulb.

    Smell prediction

    The experiments revealed that similarities in odor chemistry were mirrored by similarities in neural activity. Related odors produced correlated neuronal patterns in both the piriform cortex and olfactory bulb, as measured by overlaps in neuron activity. Weakly related odors, by contrast, produced weakly related activity patterns.

    In the cortex, related odors led to more strongly clustered patterns of neural activity compared with patterns in the olfactory bulb. This observation held true across individual mice. Cortical representations of odor relationships were so well-correlated that they could be used to predict the identity of a held-out odor in one mouse based on measurements made in a different mouse.
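    That kind of cross-animal prediction can be sketched with a toy decoder. Everything below is invented for illustration (hypothetical odors, hand-made activity patterns, and a simple correlation-profile matcher, not the study's analysis); the point is that because odor relationships are shared across animals, mouse A's correlation structure can name a held-out pattern recorded from entirely different neurons in mouse B.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length response vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical trial-averaged activity (one value per imaged neuron).
# Mouse B has different neurons, but the relationships between odors
# (which pairs evoke similar patterns) are preserved across animals.
mouse_a = {
    "hexanal":  [1.0, 0.2, 0.1],
    "heptanal": [0.9, 0.3, 0.15],   # chemically close to hexanal
    "vanillin": [0.1, 1.0, 0.2],
    "eugenol":  [0.15, 0.9, 0.3],   # chemically close to vanillin
}
mouse_b = {
    "hexanal":  [0.1, 1.0, 0.9, 0.2],
    "heptanal": [0.2, 0.9, 1.0, 0.25],
    "vanillin": [1.0, 0.1, 0.2, 0.9],
    "eugenol":  [0.9, 0.2, 0.3, 1.0],
}

def identify(unknown_pattern, refs, candidates):
    """Name a held-out mouse-B pattern using only mouse A's odor relationships."""
    # How the unknown pattern relates to the reference odors in mouse B
    profile_b = [pearson(unknown_pattern, mouse_b[r]) for r in refs]
    # Choose the candidate whose mouse-A relationship profile matches best
    def mismatch(c):
        profile_a = [pearson(mouse_a[c], mouse_a[r]) for r in refs]
        return sum((x - y) ** 2 for x, y in zip(profile_a, profile_b))
    return min(candidates, key=mismatch)

print(identify(mouse_b["heptanal"], refs=["hexanal", "vanillin"],
               candidates=["heptanal", "eugenol"]))   # prints: heptanal
```

The decoder never sees mouse B's label for the held-out odor; it only compares relationship profiles, which is the sense in which cortical representations were "well-correlated" across animals.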

    Additional analyses identified a diverse array of chemical features, such as molecular weight and certain electrochemical properties, that were linked to patterns of neural activity. Information gleaned from these features was robust enough to predict cortical responses to an odor in one animal based on experiments with a separate set of odors in a different animal.

    The researchers also found that these neural representations were flexible. Mice were repeatedly given a mixture of two odors, and over time, the corresponding neural patterns of these odors in the cortex became more strongly correlated. This occurred even when the two odors had dissimilar chemical structures.

    The ability of the cortex to adapt was generated in part by networks of neurons that selectively reshape odor relationships. When the normal activity of these networks was blocked, the cortex encoded smells more like the olfactory bulb.

    “We presented two odors as if they’re from the same source and observed that the brain can rearrange itself to reflect passive olfactory experiences,” Datta said.

    Part of the reason why things like lemon and lime smell alike, he added, is likely because animals of the same species have similar genomes and therefore similarities in smell perception. But each individual has personalized perceptions as well.

    “The plasticity of the cortex may help explain why smell is on one hand invariant between individuals, and yet customizable depending on our unique experiences,” Datta said.

    Together, the results of the study demonstrate for the first time how the brain encodes relationships between odors. Compared with the relatively well-understood visual and auditory systems, far less is known about how the olfactory cortex converts information about odor chemistry into the perception of smell.

    Identifying how the olfactory cortex maps similar odors now provides new insights that inform efforts to understand and potentially control the sense of smell, according to the authors.

    “We don’t fully understand how chemistries translate to perception yet,” Datta said. “There’s no computer algorithm or machine that will take a chemical structure and tell us what that chemical will smell like.”

    “To actually build that machine and to be able to someday create a controllable, virtual olfactory world for a person, we need to understand how the brain encodes information about smells,” Datta said. “We hope our findings are a step down that path.”

    Nanocrystals With a Unique Surface Texture That Eradicates Bacterial Biofilms

    The COVID-19 pandemic is raising fears of new pathogens, such as novel viruses and drug-resistant bacteria. Against this backdrop, a Korean research team has recently drawn attention for developing a technology that removes antibiotic-resistant bacteria by controlling the surface texture of nanomaterials.

    A joint research team from POSTECH and UNIST has introduced mixed-FeCo-oxide-based surface-textured nanostructures (MTex) as a highly efficient magneto-catalytic platform in the international journal Nano Letters. The team consisted of professors In Su Lee and Amit Kumar with Dr. Nitee Kumari of POSTECH’s Department of Chemistry and Professor Yoon-Kyung Cho and Dr. Sumit Kumar of UNIST’s Department of Biomedical Engineering.


    A remote control for neurons

    A novel material for controlling human neuron cells could deepen our understanding of cell interactions and enable new therapies in medicine.

    A team led by researchers at Carnegie Mellon University has created a new technology that enhances scientists’ ability to communicate with neural cells using light. Tzahi Cohen-Karni, associate professor of biomedical engineering and materials science and engineering, led a team that synthesized three-dimensional fuzzy graphene on a nanowire template to create a superior material for photothermally stimulating cells. NW-templated three-dimensional (3D) fuzzy graphene (NT-3DFG) enables remote optical stimulation without the need for genetic modification and uses orders of magnitude less energy than available materials, preventing cellular stress.

    Graphene is abundant, cheap, and biocompatible. Cohen-Karni’s lab has been working with graphene for several years, developing a technique of synthesizing the material in 3D topologies that he’s labeled “fuzzy” graphene. By growing two-dimensional (2D) graphene flakes out-of-plane on a silicon nanowire structure, they’re able to create a 3D structure with broadband optical absorption and unparalleled photothermal efficiency.

    These properties make it ideal for modulating cellular electrophysiology with light through the optocapacitive effect, in which rapidly applied light pulses alter the cell membrane capacitance. NT-3DFG can be readily made in suspension, allowing the study of cell signaling within and between both 2D cell systems and 3D ones, such as human cell-based organoids.


    (Image caption: Graphene flakes are grown on silicon nanowires to achieve superior conductivity. Credit: College of Engineering)

    Systems like these are not only crucial to understanding how cells signal and interact with each other, but also hold great potential for the development of new therapeutic interventions. Exploration into these opportunities, however, has been limited by the risk of cellular stress that existing optical remote-control technologies present. NT-3DFG eliminates this risk by using one to two orders of magnitude less energy. Its biocompatible surface is easy to modify chemically, making it versatile for use with different cell types and environments. Using NT-3DFG, photothermal stimulation treatments could be developed for motor recruitment to induce muscle activation or could direct tissue development in an organoid system.


    (Image caption: Nanowires are able to stimulate neurons from outside the cell membrane. Credit: College of Engineering)

    “This is an outstanding collaborative work of experts from multiple fields, including neuroscience through Pitt and UChicago, and photonics and materials science through UNC and CMU,” said Cohen-Karni. “The developed technology will allow us to interact with either engineered tissues or with nerve or muscle tissue in vivo. This will allow us to control and affect tissue functionality using light remotely with high precision and low needed energies.”

    Additional contributions to the project were made by Maysam Chamanzar, assistant professor of electrical and computer engineering. His team’s core expertise in photonics and neurotechnologies helped develop the much-needed tools both to characterize the unique hybrid nanomaterials and to stimulate the cells while optically recording their activity.


    (Image caption: Neurons respond to optical stimulus from NT-3DFG nanostructures. Credit: College of Engineering)

    “The broadband absorption of these 3D nanomaterials enabled us to use light at wavelengths that can penetrate deep into the tissue to remotely excite nerve cells. This method can be used in a whole gamut of applications, from designing non-invasive therapeutics to basic scientific studies,” said Chamanzar.

    The team’s findings are significant both for our understanding of cell interactions and the development of therapies that harness the potential of the human body’s own cells. Nanostructures created using NT-3DFG may have a major impact on the future of human biology and medicine.

    Nikola Tesla’s Ether Theory

    By J. J. J.

    The ether is considered a universal medium consisting of a primary substance, attenuated beyond conception, which fills all space and connects all matter. This medium, or field of force, is responsible for action at a distance—a concept where an object can interact with other objects even though they are separated in space. This idea still baffles today’s physicists, but was understood by Nikola Tesla long before Albert Einstein coined the phrase “spooky action at a distance”.

    Before I get into Tesla’s explanation of the ether, I must first recall the famous 1887 Michelson-Morley experiment, because I know some readers will immediately bring it up. The experiment was intended to detect the ether using light beams and mirrors to record the speed of light through the ether relative to the Earth’s movement around the Sun; however, the two scientists failed to detect the ether and it became one of the most famous failed experiments in history. Surprisingly though, the experimenters did not account for the fact that the speed of light was relative to the observer moving with the apparatus, which led to the null effect. What it did, rather, was prove that the average velocity of light for a round trip between a beam splitter and a mirror was independent of motion through space. Either way, physicists agreed that by its nature, the ether cannot be detected and it is unnecessary for explaining how light travels through space.  

    It was Heinrich Hertz who, around the same time as the Michelson-Morley experiment, demonstrated the notion of action at a distance by proving the existence of electromagnetic waves, first predicted by James Clerk Maxwell in 1864. Since these waves travel across space, there must be a medium carrying them. Like Maxwell, Hertz postulated that the ether was structureless beyond conception, and yet solid, possessing a rigidity incomparably greater than the hardest steel. Electromagnetic waves were then believed to be transverse waves (waves that vibrate at right angles to their direction of travel).

    In the early 1890s, Nikola Tesla repeated Hertz’s experiments with a much improved and a far more powerful apparatus, coming to the conclusion that what Hertz observed were longitudinal waves in a gaseous medium propagated by alternate compression and expansion. After discovering these results, Tesla declared that light, and other electromagnetic waves, are not transverse waves (a theory still believed today in conventional physics), but instead are a longitudinal disturbance in the ether involving alternate compressions and rarefactions. In his own words, “light can be nothing else than a sound wave in the ether.  Since light has such a constancy of velocity, light can only be explained by assuming that it is dependent solely on the physical properties of the medium, especially density and elastic force.” It wasn’t until after Nikola Tesla met with Hertz and explained his results that Hertz then changed his views on the ether and accepted that it was a gaseous medium rather than a stationary one.

    Believing that the ether was one of the most important results of modern scientific research, Tesla refused to abandon it because in his mind the ether was an important key to understanding how electrical energy could travel through space without wires. He displayed this phenomenon in numerous experiments and lectures throughout the 1890s.

    It wasn’t until 1896 that Tesla finally obtained experimental proof of the ether. He invented a new form of vacuum tube which could be charged to any high potential and operated at electrical pressures of up to 4,000,000 volts. In 1929, Tesla spoke of these vacuum tubes, saying, “One of the first striking observations made with my tubes was that a purplish glow for several feet around the end of the tube was formed, and I readily ascertained that it was due to the escape of the charges of the particles as soon as they passed out into the air; for it was only in a nearly perfect vacuum that these charges could be confined to them. The coronal discharge proved that there must be a medium besides air in the space, composed of particles immeasurably smaller than those of air, as otherwise such a discharge would not be possible. On further investigation I found that this gas was so light that a volume equal to that of the earth would weigh only about one-twentieth of a pound.”

    To explain the density of the ether, Tesla referred to William Thomson’s equations. In 1932, Tesla said, “Its density has been first estimated by Lord Kelvin and conformably to his finding a column of one square centimeter cross section and of a length such that light, traveling at a rate of three hundred thousand kilometers per second, would require one year to transverse it, should weigh 4.8 grams. This is just about the weight of a prism of ordinary glass of the same cross section and two centimeters length which, therefore, may be assumed as the equivalent of the ether column in absorption. A column of the ether one thousand times longer would thus absorb as much light as twenty meters of glass. However, there are suns at distances of many thousands of light years and it is evident that virtually no light from them can reach the earth. But if these suns emit rays immensely more penetrative than those of light they will be slightly dimmed and so the aggregate amount of radiations pouring upon the earth from all sides will be overwhelmingly greater than that supplied to it by our luminary. If light and heat rays would be as penetrative as the cosmic, so fierce would be the perpetual glare and so scorching the heat that life on this and other planets could not exist.”

    According to Nikola Tesla’s ether theory, all matter in the universe is metamorphous from the ether. When the ether is set in motion, it becomes gross matter. All matter, then, is merely ether in motion. In 1900, Tesla said, “By being set in movement, ether becomes matter perceptible to our senses; the movement arrested, the primary substance reverts to its normal state and becomes imperceptible. If this theory of the constitution of matter is not merely a beautiful conception, which in its essence is contained in the old philosophy of the Vedas, but a physical truth, then if the ether whirl or atom be shattered by impact or slowed down and arrested by cold, any material, whatever it be, would vanish into seeming nothingness, and, conversely, if the ether be set in movement by some force, matter would again form. Thus, by the help of a refrigerating machine or other means for arresting ether movement and an electrical or other force of great intensity for forming ether whirls, it appears possible for man to annihilate or to create at his will all we are able to perceive by our tactile sense.”

    In summary, Tesla experimented and proved his theories using the scientific method. His methods were far superior to those of other physicists of his time, because he had motors and transformers of his own invention to help with his experiments, including the induction motor, the Tesla coil and many other apparatuses. In the future the ether may be referred to as dark matter, the force, etc., but Nikola Tesla’s ether theory will be proven true in years to come.

    Scientists create polymers to detect banned substances in wastewater

    Molecularly imprinted polymers, which have been created with the participation of a SUSU scientist, have become the base for a unique sensor that detects banned substances in wastewater. Police forces in European countries, where the problem of drug production is particularly acute, have shown interest in this development. The results of the research on creating these polymers have been published in a first-quartile journal, Biosensors and Bioelectronics.

    An international team, which included SUSU Senior Research Fellow Natalia Beloglazova, was set the task of detecting traces of drugs in wastewater from illicit drug laboratories. The scientists have designed an automated sensor system that is now the basis of the Micromole project, part of the Horizon 2020 program for research and innovation of the European Union.

    The development consists of a system of sensors intended for continuous monitoring of wastewater flow. The system allows authorities to search for laboratories creating synthetic amphetamine drugs, whose waste contains traces of production. The majority of supplies to the illicit market come from European countries, and special attention is paid to amphetamine here.


    Metals and insulators are the yin and yang of physics, their respective material properties strictly dictated by their electrons’ mobility: metals should conduct electrons freely, while insulators keep them in place.

    So when physicists from Princeton University in the US found a quantum quirk of metals bouncing around inside an insulating compound, they were lost for an explanation.

    We’ll need to wait on further studies to find out exactly what’s going on. But one tantalising possibility is that a previously unseen particle is at work, one that represents neutral ground in electron behaviour. They’re calling it a ‘neutral fermion’.

    “This came as a complete surprise,” says physicist Sanfeng Wu from Princeton University in the US.

    “We asked ourselves, ‘What’s going on here?’ We don’t fully understand it yet.”


    How hearing loss in old age affects the brain

    If your hearing deteriorates in old age, the risk of dementia and cognitive decline increases. So far, it hasn’t been clear why. A team of neuroscientists at Ruhr-Universität Bochum (RUB) in Germany examined what happens in the brain when hearing gradually deteriorates: key areas of the brain are reorganized, and this affects memory. The results are published online in the journal “Cerebral Cortex”.

    Daniela Beckmann, Mirko Feldmann, Olena Shchyglo and Professor Denise Manahan-Vaughan from the Department of Neurophysiology of the Medical Faculty worked together for the study.

    When sensory perception fades

    The researchers studied the brains of mice that exhibit hereditary hearing loss, similar to age-related hearing loss in humans. The scientists analysed the density of neurotransmitter receptors in the brain that are crucial for memory formation. They also researched the extent to which information storage in the brain’s most important memory organ, the hippocampus, was affected.

    Adaptability of the brain suffers

    Memory is enabled by a process called synaptic plasticity. In the hippocampus, synaptic plasticity was chronically impaired by progressive hearing loss. The distribution and density of neurotransmitter receptors in sensory and memory regions of the brain also changed constantly. The stronger the hearing impairment, the poorer were both synaptic plasticity and memory ability.

    “Our results provide new insights into the putative cause of the relationship between cognitive decline and age-related hearing loss in humans,” said Denise Manahan-Vaughan. “We believe that the constant changes in neurotransmitter receptor expression caused by progressive hearing loss create shifting sands at the level of sensory information processing that prevent the hippocampus from working effectively”, she adds.

    How Cocaine Works in Your Brain

    Dopamine, euphoria, stress & changing gene expression

    Cocaine is probably the world’s most popular hard drug. Produced mostly in Colombia and consumed across the rest of the world, especially in the US and UK, it fuels a market with an estimated global value of $120 billion.

    Where does the characteristic euphoric & energetic high come from though? And why can it be so addictive?

    Cocaine’s neural mechanism is fascinating and affects more parts of the brain than we first realised.

    1. Euphoria and Confidence — Dopamine

    Cocaine’s primary effect is in how it affects our brain’s dopamine supply. Dopamine is a well-known chemical; it’s responsible for pleasure and is released when we do things like eat or have sex.

    It’s involved in motivation and reward-based behaviour; when we do something our brains deem good for our survival it rewards us with a hit of dopamine. This is so we associate the behaviour with pleasure, motivating us to do it again.

    This mechanism takes place specifically in the nucleus accumbens (NAc), located in the mesolimbic pathway — the brain’s reward centre — where the concentration of dopamine receptors is high.

    Your brain cells have two distinct ends; one end sends signals and the other receives them. The sending and receiving ends of separate cells face each other but don’t touch. The tiny gap between them is called the synapse, or synaptic gap.

    Signals are sent between brain cells via chemical messengers called neurotransmitters, like dopamine. Dopamine is released from the sending cell into the synapse and activates receptors in the receiving cell, like a key in a lock.

    Once dopamine activates its receptor a pleasure signal is sent down the receiving cell as a result. The dopamine then pops out of the receptor and flows back into the synapse and up into the sending cell once again — this process is called reuptake.

    Cocaine hijacks this system by flooding the synapse with dopamine and then blocking reuptake. The practical effect is a load more dopamine molecules swimming around in your synapses and constantly activating your receptors. This sends far more pleasure signals through your brain cells than normal.
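    A toy discrete-time model makes the arithmetic of this concrete (the numbers are illustrative, not pharmacological): each step releases a unit of dopamine into the synapse, and transporters then clear a fixed fraction of whatever is there. Blocking reuptake shrinks that fraction, so dopamine settles at a much higher level.

```python
def synaptic_dopamine(release, reuptake_fraction, steps=200):
    """Toy model: each step, dopamine is released into the synapse,
    then reuptake transporters clear a fixed fraction of what is there."""
    level = 0.0
    for _ in range(steps):
        level += release                    # dopamine released into the synapse
        level *= (1.0 - reuptake_fraction)  # transporters pump a fraction back
    return level

baseline = synaptic_dopamine(release=1.0, reuptake_fraction=0.5)
blocked  = synaptic_dopamine(release=1.0, reuptake_fraction=0.1)  # reuptake mostly blocked
print(baseline, blocked)
```

With these numbers the steady-state level is release × (1 − k) / k, so cutting the clearance fraction k from 0.5 to 0.1 raises the synaptic dopamine level ninefold.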

    In addition, cocaine also affects the immune system’s glial cells. It does this at a specific receptor, which results in an inflammatory response in the brain, further exciting neurons and releasing even more dopamine.

    This is where the pleasure and euphoria part of the cocaine high come from.

    2. Energy and Heart Stress — Norepinephrine

    Cocaine also affects your norepinephrine receptors, much like it affects your dopamine receptors, but with less intensity.

    Norepinephrine (also called noradrenaline) is a hormone associated with your fight or flight response in your sympathetic nervous system. It increases your heart rate, forces blood to your muscles and kicks up your blood sugar.

    The signature energetic, ‘invincible’ high of cocaine comes from the mixture of dopamine and norepinephrine activation in the brain, which is why a user can feel energetic and euphoric at the same time.

    However, norepinephrine also affects your blood vessels. As well as increasing your heart rate, cocaine narrows your blood vessels. An increased heart rate coupled with decreased space in your vessels for blood to flow causes a sharp rise in blood pressure, much like the pressure of water coming out of a hose rises sharply if you increase the water supply and squeeze the pipe.
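    The hose analogy maps onto Poiseuille’s law for laminar flow in a rigid tube, where the pressure needed to drive a flow Q through a tube of radius r scales as Q / r⁴. The numbers below are round illustrative values, not physiological measurements (real vessels are elastic and flow is pulsatile), but they show how modest changes compound:

```python
import math

def pressure_drop(flow, radius, viscosity=3.5e-3, length=0.1):
    """Poiseuille's law for laminar flow in a rigid tube:
    dP = 8 * mu * L * Q / (pi * r^4). Illustrative values only."""
    return 8 * viscosity * length * flow / (math.pi * radius ** 4)

normal = pressure_drop(flow=1.0e-6, radius=2.0e-3)      # resting flow, relaxed vessel
stimulated = pressure_drop(flow=1.5e-6, radius=1.6e-3)  # higher output, constricted vessel
print(f"pressure rises {stimulated / normal:.1f}x")
```

Because of the r⁴ term, a 20 percent narrowing combined with 50 percent more flow raises the driving pressure by a factor of about 3.7.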

    This places stress on the heart and vascular system, especially in repeated cocaine use.

    3. Stress — The Ventral Tegmental Area

    The mesolimbic pathway, the area of the brain where cocaine most affects the dopamine supply, originates in a region called the ventral tegmental area.

    Apart from norepinephrine cocaine doesn’t seem to affect any other stress hormones, but the ventral tegmental area seems to be a critical integration centre in the brain.

    It relays information about stress, pleasure and other cues to the rest of the brain. Cocaine’s activation of the mesolimbic pathway can also have the effect of pushing stress cues to the rest of the brain, since it originates in a key integration centre.

    The possible stress effect of cocaine will mostly be overshadowed by its pleasurable and energetic effects, but can become more apparent with constant and repeated use.

    4. Addiction — Gene Expression

    This is probably the most fascinating part of cocaine’s neural effect. Cocaine has the potential to be horribly addictive, but it could never be adequately explained by just its effect on the brain’s dopamine system.

    Recent research has found cocaine has the potential of changing gene expression. That means it can change how your genes react and create cells.

    We each have roughly 20,000 genes. These genes exist in every cell in our bodies and define what cells are, how they form and how much work they can handle. Every cell has the capacity to change its level of activity based on the demands we place on it.

    For example, say you’re lifting weights. The more you use your muscles the more your muscle cells will adapt to the demands you place on them. The more weights you lift, the more work the cells will be able to deal with and the stronger your muscles get.

    They do this via gene expression. Certain genes within each cell deal with the capacity of that cell to output or reflect a certain action, like a dial.

    Cocaine can alter the expression of numerous genes within the part of the brain it affects the most; the Nucleus Accumbens (NAc), located in the mesolimbic pathway — the brain’s reward centre.

    The most interesting of these is the protein ΔFosB, a gene transcription factor: a chemical that sets the level of gene expression. Cocaine causes the build-up of ΔFosB in cells in the NAc.

    Experiments with mice showed that high ΔFosB levels led to a change in their cell’s gene expression, making them far more prone to addictive behaviours. Conversely, the lack of ΔFosB led to the opposite effect.

    ΔFosB cranks up the ‘addictive behaviour’ dial in cells. ΔFosB naturally lasts about 6–8 weeks before it breaks down, so repeated and regular use of cocaine builds up high levels of ΔFosB in cells.
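    That build-up is just repeated dosing against slow first-order decay, which a short simulation can show. The half-life, dose size and schedules below are assumptions chosen only to match the 6–8 week figure; the units are arbitrary:

```python
import math

HALF_LIFE_DAYS = 49.0                  # middle of the quoted 6-8 week range (assumption)
DECAY = math.log(2) / HALF_LIFE_DAYS   # first-order decay rate per day

def delta_fosb_level(days, dose_interval, dose=1.0):
    """Simulate ΔFosB day by day, with a fixed boost every `dose_interval` days."""
    level = 0.0
    for day in range(days):
        if day % dose_interval == 0:
            level += dose              # each use adds a fixed increment (toy units)
        level *= math.exp(-DECAY)      # slow first-order breakdown
    return level

occasional = delta_fosb_level(days=180, dose_interval=30)  # roughly monthly use
regular    = delta_fosb_level(days=180, dose_interval=3)   # every few days
print(occasional, regular)
```

Because each dose lands long before the last has decayed away, frequent use ratchets the level up toward a plateau far above anything occasional use reaches.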

    When the cells replicate they do so with the new gene expression, meaning new cells will have the same addictive dial cranked all the way up.

    This addictive behaviour default originates in the part of the brain responsible for reward and positive association. This causes more addictive patterns in anything the person does, since all reward behaviour flows through the same area of the brain cocaine affects.

    Regular cocaine use can actually change your genes and give you more of an addictive personality.

    5. Long-Term Personality Changes — Neural Adaptation

    Repeated cocaine use can cause other long-term personality changes beyond altering the expression of your genes.

    The brain is an amazingly adaptable machine; this adaptability is known as plasticity. It has an idea of what its ideal levels of dopamine should be, so when there’s too much flowing around in the synapses for too long, the brain can adapt in response. It can either cut down its dopamine supply or remove dopamine receptors from the receiving cells.

    The brain requires a certain level of dopamine activation to maintain mood and proper function. A lack of dopamine, or of dopamine receptors, is the main factor behind physical withdrawal symptoms: there simply isn’t enough dopamine activation to reward positive behaviours and maintain mood.

    This is also part of where the comedown from a cocaine hit comes from. When the drug wears off, the brain breaks down the dopamine it used, so there is less of it after a hit than there was before, and therefore less dopamine activation. The brain then needs to rebuild its supplies to return to a normal level of activation. How much cocaine a person uses is directly related to how intense the high is, how much dopamine the drug hijacks and, consequently, how much dopamine is lost afterward.
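    The deplete-then-rebuild dynamic above can be sketched as a simple first-order recovery process (a toy illustration, not from the article; the baseline pool size, recovery time constant, and depletion fractions are all made-up parameters):

    ```python
    import math

    # Toy model of the comedown: a hit depletes the releasable dopamine
    # pool, which then relaxes exponentially back toward baseline.
    BASELINE = 100.0      # assumed baseline pool, arbitrary units
    TAU_HOURS = 24.0      # assumed recovery time constant

    def pool_after(hit_fraction, hours_since_hit):
        """Dopamine pool (arbitrary units) some hours after a hit that
        depleted `hit_fraction` of the baseline pool."""
        depleted = BASELINE * (1.0 - hit_fraction)
        # exponential relaxation from the depleted level toward baseline
        return BASELINE + (depleted - BASELINE) * math.exp(-hours_since_hit / TAU_HOURS)

    # A stronger hit leaves a deeper deficit at the same time point,
    # matching the claim that a more intense high costs more dopamine.
    mild = pool_after(0.2, 12)    # 20% depletion, 12 hours later
    heavy = pool_after(0.6, 12)   # 60% depletion, 12 hours later
    print(f"mild: {mild:.1f}, heavy: {heavy:.1f}")
    ```

    The single time constant is the simplifying assumption here; the point is only that the deficit scales with the size of the hit and fades as supplies rebuild.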

    The other fascinating way cocaine could cause long-term neural changes is, again, through its stimulation of ΔFosB production. In addition to dialling up addictive behaviours, ΔFosB can increase nerve cell growth (through a gene called CDK5). Elevated levels can lead nerve cells in the NAc to grow more dendrites, the parts of the cell that pick up signals from other neurons. It’s like having a larger TV aerial: because it’s bigger, it can pick up more signal.

    More dendrites mean a cell can pick up more signals, so other parts of the brain can have a greater effect on it. Since these cells sit in the part of the brain that deals with reward-based behaviour, this can bring about long-term changes in memory, emotion and learning, all of which are tied to reward.

    Cocaine is a fascinating, powerful and potentially very dangerous substance. It can cause long-term changes to the brain, make a user’s personality far more addictive and can negatively affect the heart and vascular system. Research continues on this most popular of illicit drugs and we’ll find out more as time goes on.

    By Rajeet Singh (Medium). Image: Alexander Krivitskiy from Pexels.

    - How Energy Works - Part One -

    Some things to keep in mind:

  • The idea that nothing exists until it is interacted with is false.
  • Everything has its own vibration, with or without interaction from outside sources.
  • Everything has energy that controls itself.
  • We control our own energy.
  • The objects around us control their own energy.
  • To bring something into your life, you have to be an exact vibrational match for it. And, because energies are always shifting/never stagnant, to keep something in your life, you have to keep being an exact vibrational match for it. 

    This means that there has to be some sort of quantum convergence or entanglement in your respective energies.

  • Every single thing in the universe is a fractal or mirror of source. 
  • The objects around you are fractals.
  • The people around you are mirrors.
  • You are the mirror.
  • The people and events in your life are direct manifestations and mirrors of your subconscious thoughts and energy.

    The things you think about others are the things you think about yourself. 

    To choose to believe that outside forces can have any effect on other people’s lives or energy without their expressly given consent is to believe that outside forces can have that same amount of sway on your life without your expressly given consent. 

    This is a form of giving away your personal power. It is you choosing to create a world or reality in which these things are true. 

    (Image caption: A region of the mouse brain known as the cNST (colored yellow at top) responded to the presence of sugar, even if it was infused directly into the gut. Credit: Tan et al./Nature 2020)

    A Gut-to-Brain Circuit Drives Sugar Preference and May Explain Sugar Cravings


    The sensation of sweetness starts on the tongue, but sugar molecules also trip sensors in the gut that directly signal the brain. This could explain why artificial sweeteners fail to satisfy the insatiable craving for sugar.

    A little extra sugar can make us crave just about anything, from cookies to condiments to coffee smothered in whipped cream. But its sweetness doesn’t fully explain our desire. Instead, new research shows this magic molecule has a back channel to the brain.

    Like other sweet-tasting things, sugar triggers specialized taste buds on the tongue. But it also switches on an entirely separate neurological pathway – one that begins in the gut, Howard Hughes Medical Institute Investigator Charles Zuker and colleagues report on April 15, 2020 in the journal Nature.

    In the intestines, signals heralding sugar’s arrival travel to the brain, where they nurture an appetite for more, the team’s experiments with mice showed. This gut-to-brain pathway appears picky, responding only to sugar molecules – not artificial sweeteners.

    Scientists already knew sugar exerted unique control over the brain. A 2008 study, for example, showed that mice without the ability to taste sweetness can still prefer sugar. Zuker’s team’s discovery of a sugar-sensing pathway helps explain why sugar is special – and points to ways we might quell our insatiable appetite for it.

    “We need to separate the concepts of sweet and sugar,” says Zuker, a neuroscientist at Columbia University. “Sweet is liking, sugar is wanting. This new work reveals the neural basis for sugar preference.”

    Sweet stuff

    The term sugar is a catchall, encompassing a number of substances our bodies use as fuel. Eating sugar activates the brain’s reward system, making humans and mice alike feel good. However, in a world where refined sugar is plentiful, this deeply ingrained appetite can run amok. The average American’s annual sugar intake has skyrocketed from less than 10 pounds in the late 1800s to more than 100 pounds today. That increase has come at a cost: Studies have linked excess sugar consumption to numerous health problems, including obesity and type 2 diabetes.

    Previously, Zuker’s work showed that sugar and artificial sweeteners switch on the same taste-sensing system. Once in the mouth, these molecules activate the sweet-taste receptors on taste buds, initiating signals that travel to the part of the brain that processes sweetness.

    But sugar affects behavior in a way that artificial sweetener doesn’t. Zuker’s team ran a test pitting sugar against the sweetener Acesulfame K, which is used in diet soda, sweetening packets, and other products. Offered water with the sweetener or with sugar, mice at first drank both, but within two days switched almost exclusively to sugar water. “We reasoned this unquenchable motivation that the animal has for consuming sugar, rather than sweetness, might have a neural basis,” Zuker says.

    Sugar circuit

    By visualizing brain activity when the rodents consumed sugar versus artificial sweetener or water, the researchers for the first time identified the brain region that responds solely to sugar: the caudal nucleus of the solitary tract (cNST). Found in the brain stem, separate from where mice process taste, the cNST is a hub for information about the state of the body.  

    The path to the cNST, the team determined, begins in the lining of the intestine. There, sensor molecules spark a signal that travels via the vagus nerve, which provides a direct line of information from the intestines to the brain.

    This gut-to-brain circuit favors one form of sugar: glucose and similar molecules. It ignores artificial sweeteners — perhaps explaining why these additives can’t seem to fully replicate sugar’s appeal. It also overlooks some other types of sugar, most notably fructose, which is found in fruit. Glucose is a source of energy for all living things. That could explain why the system’s specificity for the molecule evolved, say study lead authors Hwei Ee Tan and Alexander Sisti, who are graduate students in Zuker’s lab.  

    Previously, scientists speculated that sugar’s energy content, or calories, explained its appeal, since many artificial sweeteners lack calories. However, Zuker’s study showed this is not the case, since calorie-free, glucose-like molecules can also activate the gut-to-brain sugar-sensing pathway.

    To better understand how the brain’s strong preference for sugar develops, his group is now studying the connections between this gut-brain sugar circuit and other brain systems, like those involved in reward, feeding, and emotions. Although his studies are in mice, Zuker believes that essentially the same glucose-sensing pathway exists in humans.

    “Uncovering this circuit helps explain how sugar directly impacts our brain to drive consumption,” he says. “It also exposes new potential targets and opportunities for strategies to help curtail our insatiable appetite for sugar.”

    Decades Old Mystery Solved: A “New Kind of Electrons”

    Why do certain materials emit electrons with a very specific energy? This has been a mystery for decades — scientists at TU Wien have found an answer.

    It is something quite common in physics: electrons leave a certain material, fly away and are then measured. Some materials emit electrons when they are irradiated with light; these electrons are then called “photoelectrons.” In materials research, so-called “Auger electrons” also play an important role: they can be emitted by atoms if an electron is first removed from one of the inner electron shells. Now scientists at TU Wien (Vienna) have succeeded in explaining a completely different type of electron emission, which can occur in carbon materials such as graphite. This electron emission had been known for about 50 years, but its cause remained unclear.

    Strange electrons without explanation

    “Many researchers have already wondered about this,” says Prof. Wolfgang Werner from the Institute of Applied Physics. “There are materials that consist of atomic layers held together only by weak van der Waals forces, for example graphite. And it was discovered that this type of graphite emits very specific electrons, which all have exactly the same energy, namely 3.7 electron volts.”


    No-nonsense quantum field theory by Jakob Schwichtenberg.

    I started reading this book a week and a half ago and I can't recommend it enough! I'm usually not the best at thinking in the way that physicists do and my last physics class was half a decade ago but this book is so gentle and readable and nice. I'm kind of in awe.

    It's surprisingly not too bad with requirements either. I'm about a third of the way through, and so far it's used only calculus (up to partial derivatives) and basic linear algebra.

    Honestly it's making me feel a lot better. Usually I get bogged down and lose motivation, but somehow not this time.