Wednesday, February 29, 2012

Neolithic Revolution..Formation of the AGRICULTURAL ERA


Neolithic Revolution


The Neolithic Revolution was the first agricultural revolution. It was the wide-scale transition of many human cultures from a lifestyle of hunting and gathering to agriculture and settlement. Archaeological data indicate that various forms of plant and animal domestication evolved independently in six separate locations worldwide circa 10,000–7,000 years BP (8,000–5,000 BC). The earliest known evidence exists in the tropical and subtropical areas of southwestern/southern Asia.[1]
However, the Neolithic Revolution involved far more than the adoption of a limited set of food-producing techniques. During the next millennia it would transform the small and mobile groups of hunter-gatherers that had hitherto dominated human history into sedentary societies based in built-up villages and towns, which radically modified their natural environment by means of specialized food-crop cultivation (e.g., irrigation and food storage technologies) that allowed extensive surplus food production. These developments provided the basis for high population density settlements, specialized and complex labor diversification, trading economies, the development of non-portable art, architecture, and culture, centralized administrations and political structures, hierarchical ideologies, and depersonalized systems of knowledge (e.g., property regimes and writing). The first full-blown manifestation of the entire Neolithic complex is seen in the Middle Eastern Sumerian cities (ca. 3,500 BC), whose emergence also inaugurates the end of the prehistoric Neolithic period.
The relationship of the above-mentioned Neolithic characteristics to the onset of agriculture, their sequence of emergence, and empirical relation to each other at various Neolithic sites remains the subject of academic debate, and seems to vary from place to place, rather than being the outcome of universal laws of social evolution.[2][3]

2002..Nobel Prize in Physiology or Medicine


Sydney Brenner

H. Robert Horvitz

John E. Sulston

The Nobel Prize in Physiology or Medicine 2002 was awarded jointly to Sydney Brenner, H. Robert Horvitz and John E. Sulston "for their discoveries concerning 'genetic regulation of organ development and programmed cell death'"

Genetic regulation of programmed cell death

Sir John Sulston of the Wellcome Trust Sanger Institute developed techniques to study cell divisions in the nematode, from the fertilized egg to the 959 cells in the adult organism.
Sydney Brenner, H. Robert Horvitz and John E. Sulston have been jointly awarded the Nobel Prize in Physiology or Medicine. The prize was awarded by the Nobel Assembly at Karolinska Institutet for 2002 for their discoveries concerning genetic regulation of organ development and programmed cell death.
The human body consists of hundreds of cell types, all originating from the fertilized egg. During the embryonic and foetal periods, the number of cells increases dramatically. The cells mature and become specialised to form the various tissues and organs of the body. Large numbers of cells are also formed in the adult body. In parallel with this generation of new cells, cell death is a normal process, both in the foetus and the adult, to maintain the appropriate number of cells in the tissues. This delicate, controlled elimination of cells is called programmed cell death.
Developmental biologists first described programmed cell death. They noted that cell death was necessary for embryonic development, for example when tadpoles undergo metamorphosis to become adult frogs. In the human foetus, the interdigital mesoderm initially formed between fingers and toes is removed by programmed cell death. The vast excess of neuronal cells present during the early stages of brain development is also eliminated by the same mechanism.

H. Robert Horvitz used C. elegans to investigate whether there was a genetic program controlling cell death.
Sydney Brenner realized, in the early 1960s, that fundamental questions regarding cell differentiation and organ development were hard to tackle in higher animals. Therefore, a genetically amenable, multicellular model organism simpler than mammals was required. The ideal solution proved to be the nematode Caenorhabditis elegans. This worm, approximately 1 mm long, has a short generation time and is transparent, which made it possible to follow cell division directly under the microscope. Brenner provided the basis in a publication from 1974, in which he broke new ground by demonstrating that specific gene mutations could be induced in the genome of C. elegans by the chemical compound EMS (ethyl methane sulphonate).
Different mutations could be linked to specific genes and to specific effects on organ development. Detailed studies in this simple model organism demonstrated that 131 of a total of 1,090 cells die reproducibly during development, and that this natural cell death is controlled by a unique set of genes.
John Sulston extended Brenner's work with C. elegans and developed techniques to study all cell divisions in the nematode, from the fertilized egg to the 959 cells in the adult organism. In a publication from 1976, Sulston described the cell lineage for a part of the developing nervous system. He showed that the cell lineage is invariant, i.e. every nematode underwent exactly the same program of cell division and differentiation. As a result of these findings Sulston made the seminal discovery that specific cells in the cell lineage always die through programmed cell death and that this could be monitored in the living organism. He described the visible steps in the cellular death process and demonstrated the first mutations of genes participating in programmed cell death, including the nuc-1 gene.
Sulston also showed that the protein encoded by the nuc-1 gene is required for degradation of the DNA of the dead cell. Robert Horvitz continued Brenner's and Sulston's work on the genetics and cell lineage of C. elegans. In a series of elegant experiments that started during the 1970s, Horvitz used C. elegans to investigate whether there was a genetic program controlling cell death. In a pioneering publication from 1986, he identified the first two bona fide "death genes", ced-3 and ced-4. He showed that functional ced-3 and ced-4 genes were a prerequisite for cell death to be executed. Later, Horvitz showed that another gene, ced-9, protects against cell death by interacting with ced-4 and ced-3. He also identified a number of genes that direct how the dead cell is eliminated. Horvitz showed that the human genome contains a ced-3-like gene. We now know that most genes that are involved in controlling cell death in C. elegans have counterparts in humans.

Salk Institute professor Sydney Brenner demonstrated that specific gene mutations could be induced in the genome of C. elegans by the chemical compound EMS (ethyl methane sulphonate).
Knowledge of programmed cell death has helped us to understand the mechanisms by which some viruses and bacteria invade our cells. We also know that in AIDS, neurodegenerative diseases, stroke and myocardial infarction, cells are lost as a result of excessive cell death. Other diseases, like autoimmune conditions and cancer, are characterized by a reduction in cell death, leading to the survival of cells normally destined to die.
Research on programmed cell death is intense, including in the field of cancer. Many treatment strategies are based on stimulation of the cellular "suicide program". This is, for the future, a most interesting and challenging task to further explore in order to reach a more refined manner to induce cell death in cancer cells.
This year's Nobel Laureates in Physiology or Medicine have made seminal discoveries concerning the genetic regulation of organ development and programmed cell death. By establishing and using the nematode Caenorhabditis elegans as an experimental model system, possibilities were opened to follow cell division and differentiation from the fertilized egg to the adult.
The Laureates have identified key genes regulating organ development and programmed cell death and have shown that corresponding genes exist in higher species, including man. The discoveries are important for medical research and have shed new light on the pathogenesis of many diseases.

Tuesday, February 28, 2012

Paper..How it is made..Wikipedia

A paper mill is a factory devoted to making paper from vegetable fibres such as wood pulp, old rags and other ingredients, using a Fourdrinier machine or another type of paper machine.



Stromer's paper mill, the building complex at the far right bottom, in the Nuremberg Chronicle of 1493. Due to their noise and smell, paper mills were required by medieval law to be erected some distance from the city walls.
A mid-nineteenth-century paper mill, the Forest Fibre Company, in Berlin, New Hampshire.
Basement of a paper mill in Sault Ste. Marie, Ontario. Pulp and paper manufacture involves a great deal of humidity, which presents a preventive maintenance and corrosion challenge.
The Diamond Sutra of the Chinese Tang Dynasty, the oldest dated printed book in the world, found at Dunhuang, from 868 CE.
Papermaking is the process of making paper, a substance which is used universally today for writing and packaging.
In papermaking, a dilute suspension of fibres in water is drained through a screen, so that a mat of randomly interwoven fibres is laid down. Water is removed from this mat of fibres by pressing and drying to make paper. Since the invention of the Fourdrinier machine in the 19th century, most paper has been made from wood pulp because of cost. But other fibre sources, such as cotton and textiles, are used for high-quality papers. One common measure of a paper's quality is its non-wood-pulp content, e.g., 25% cotton, 50% rag, etc.
An illustration depicting the papermaking process as designed by Cai Lun in 105 AD.
First the fibres are suspended in water to form a slurry in a large vat. The mold is a wire screen in a wooden frame (somewhat similar to an old window screen), which is used to scoop some of the slurry out of the vat. The slurry in the screen mold is sloshed around the mold until it forms a uniform thin coating. The fibres are allowed to settle and the water to drain. When the fibres have stabilized in place but are still damp, they are turned out onto a felt sheet, generally made of an animal product such as wool or rabbit fur, and the screen mold is immediately reused. Layers of paper and felt build up in a pile (called a 'post'); then a weight is placed on top to press out excess water and keep the paper fibres flat and tight. The sheets are then removed from the post and hung or laid out to dry. A step-by-step procedure for making paper with readily available materials can be found online.[13]
When the paper pages are dry, they are frequently run between rollers (calendered) to produce a harder writing surface. Papers may be sized with gelatin or a similar agent to bind the fibres into the sheet. Papers can be made with different surfaces depending on their intended purpose. Paper intended for printing or writing with ink is fairly hard, while paper to be used for watercolor, for instance, is heavily sized and can be fairly soft.
The wooden frame is called a "deckle". The deckle leaves the edges of the paper slightly irregular and wavy, called "deckle edges", one of the indications that the paper was made by hand. Deckle-edged paper is occasionally mechanically imitated today to create the impression of old-fashioned luxury. The impressions in paper caused by the wires in the screen that run sideways are called "laid lines" and the impressions made, usually from top to bottom, by the wires holding the sideways wires together are called "chain lines". Watermarks are created by weaving a design into the wires in the mold. This is essentially true of Oriental molds made of other substances, such as bamboo. Hand-made paper generally folds and tears more evenly along the laid lines.
Hand-made paper is also prepared in laboratories to study papermaking and in paper mills to check the quality of the production process. The "handsheets" made according to TAPPI Standard T 205[14] are circular sheets 15.9 cm (6.25 in) in diameter and are tested for paper characteristics such as brightness, strength and degree of sizing.[15]

Paper banknotes

Most banknotes are made from cotton paper (see also paper) with a weight of 80 to 90 grams per square meter. The cotton is sometimes mixed with linen, abaca, or other textile fibres. Generally, the paper used is different from ordinary paper: it is much more resilient, resists wear and tear (the average life of a banknote is two years),[16] and also does not contain the usual agents that make ordinary paper glow slightly under ultraviolet light. Unlike most printing and writing paper, banknote paper is infused with polyvinyl alcohol or gelatin to give it extra strength. Early Chinese banknotes were printed on paper made of mulberry bark, and this fibre is used in Japanese banknote paper today.
Most banknotes are made using the mould-made process, in which a watermark and thread are incorporated during the paper-forming process. The thread is a simple-looking security component found in most banknotes. It is, however, often rather complex in construction, comprising fluorescent, magnetic, metallic and microprint elements. By combining it with watermarking technology, the thread can be made to surface periodically on one side only. This is known as windowed thread and further increases the counterfeit resistance of the banknote paper. This process was invented by Portals, part of the De La Rue group in the UK. Other related methods include watermarking to reduce the number of corner folds by strengthening this part of the note, coatings to reduce the accumulation of dirt on the note, and plastic windows in the paper that make it very hard to copy.

2003..Nobel Prize in Physiology or Medicine..MRI


Paul C. Lauterbur

Sir Peter Mansfield

The Nobel Prize in Physiology or Medicine 2003 was awarded jointly to Paul C. Lauterbur and Sir Peter Mansfield "for their discoveries concerning magnetic resonance imaging"
      

Summary

Imaging of human internal organs with exact and non-invasive methods is very important for medical diagnosis, treatment and follow-up. This year's Nobel Laureates in Physiology or Medicine have made seminal discoveries concerning the use of magnetic resonance to visualize different structures. These discoveries have led to the development of modern magnetic resonance imaging, MRI, which represents a breakthrough in medical diagnostics and research.
Atomic nuclei in a strong magnetic field rotate with a frequency that is dependent on the strength of the magnetic field. Their energy can be increased if they absorb radio waves with the same frequency (resonance). When the atomic nuclei return to their previous energy level, radio waves are emitted. These discoveries were awarded the Nobel Prize in Physics in 1952. During the following decades, magnetic resonance was used mainly for studies of the chemical structure of substances. In the beginning of the 1970s, this year’s Nobel Laureates made pioneering contributions, which later led to the applications of magnetic resonance in medical imaging.
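The resonance condition described above has a compact standard form, the Larmor equation (stated here for reference; it is not part of the press release text):

\omega = \gamma B_0

where \omega is the angular frequency at which the nuclei precess, B_0 is the strength of the magnetic field, and \gamma is the gyromagnetic ratio of the nucleus. For hydrogen nuclei, \gamma / 2\pi is approximately 42.58 MHz per tesla, so doubling the field strength doubles the resonance frequency. This linear dependence is what field gradients exploit: when the field varies across space, the frequency varies across space, encoding position in the emitted signal.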
Paul Lauterbur (born 1929), Urbana, Illinois, USA, discovered the possibility to create a two-dimensional picture by introducing gradients in the magnetic field. By analysis of the characteristics of the emitted radio waves, he could determine their origin. This made it possible to build up two-dimensional pictures of structures that could not be visualized with other methods.
Peter Mansfield (born 1933), Nottingham, England, further developed the utilization of gradients in the magnetic field. He showed how the signals could be mathematically analysed, which made it possible to develop a useful imaging technique. Mansfield also showed how extremely fast imaging could be achievable. This became technically possible within medicine a decade later.
MRI
MRI is used for imaging of all organs in the body.
Magnetic resonance imaging, MRI, is now a routine method within medical diagnostics. Worldwide, more than 60 million investigations with MRI are performed each year, and the method is still in rapid development. MRI is often superior to other imaging techniques and has significantly improved diagnostics in many diseases. MRI has replaced several invasive modes of examination and thereby reduced the risk and discomfort for many patients.

Nuclei of hydrogen atoms

Water constitutes about two thirds of the human body weight, and this high water content explains why magnetic resonance imaging has become widely applicable to medicine. There are differences in water content among tissues and organs. In many diseases the pathological process results in changes of the water content, and this is reflected in the MR image.
Water is a molecule composed of hydrogen and oxygen atoms. The nuclei of the hydrogen atoms are able to act as microscopic compass needles. When the body is exposed to a strong magnetic field, the nuclei of the hydrogen atoms are directed into order – stand "at attention". When submitted to pulses of radio waves, the energy content of the nuclei changes. After the pulse, a resonance wave is emitted when the nuclei return to their previous state.
The small differences in the oscillations of the nuclei are detected. By advanced computer processing, it is possible to build up a three-dimensional image that reflects the chemical structure of the tissue, including differences in the water content and in movements of the water molecules. This results in a very detailed image of tissues and organs in the investigated area of the body. In this manner, pathological changes can be documented.
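To make "advanced computer processing" slightly more concrete, here is a minimal sketch of the core reconstruction idea (illustrative only, not the Laureates' actual algorithms; it assumes NumPy and a toy 64x64 image): the scanner's gradient encoding samples the image's spatial-frequency content (k-space), and an inverse Fourier transform recovers the image.

import numpy as np

# Toy "tissue": a square region of high water content in a 64x64 field of view.
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0

# Simulated acquisition: gradient encoding measures the image's
# spatial-frequency content (k-space), i.e. its 2D Fourier transform.
k_space = np.fft.fft2(image)

# Reconstruction: an inverse 2D Fourier transform recovers the image.
reconstruction = np.abs(np.fft.ifft2(k_space))

assert np.allclose(reconstruction, image)

In a real scanner the k-space samples are acquired one gradient setting at a time; Mansfield's fast-imaging methods amount to traversing k-space much more efficiently.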

Rapid development within medicine

The medical use of magnetic resonance imaging has developed rapidly. The first MRI equipment in health care became available at the beginning of the 1980s. In 2002, approximately 22,000 MRI cameras were in use worldwide, and more than 60 million MRI examinations were performed.
A great advantage with MRI is that it is harmless according to all present knowledge. The method does not use ionizing radiation, in contrast to ordinary X-ray (Nobel Prize in Physics in 1901) or computer tomography (Nobel Prize in Physiology or Medicine in 1979) examinations. However, patients with magnetic metal in the body or a pacemaker cannot be examined with MRI due to the strong magnetic field, and patients with claustrophobia may have difficulties undergoing MRI.

Especially valuable for examination of the brain and the spinal cord

Today, MRI is used to examine almost all organs of the body. The technique is especially valuable for detailed imaging of the brain and the spinal cord. Nearly all brain disorders lead to alterations in water content, which is reflected in the MRI picture. A difference in water content of less than a percent is enough to detect a pathological change.
In multiple sclerosis, examination with MRI is superior for diagnosis and follow-up of the disease. The symptoms associated with multiple sclerosis are caused by local inflammation in the brain and the spinal cord. With MRI, it is possible to see where in the nervous system the inflammation is localized, how intense it is, and also how it is influenced by treatment.
Imaging of the brain and the spinal cord
Examination with MRI is especially valuable for detailed imaging of the brain and the spinal cord.
Another example is prolonged lower back pain, which leads to great suffering for the patient and high costs for society. It is important to be able to differentiate between muscle pain and pain caused by pressure on a nerve or the spinal cord. MRI examinations have been able to replace previous methods that were unpleasant for the patient. With MRI, it is possible to see whether a disc herniation is pressing on a nerve and to determine if an operation is necessary.

Important preoperative tool

Since MRI yields detailed three-dimensional images, it is possible to get distinct information on where a lesion is localized. Such information is valuable before surgery. For instance, in certain microsurgical brain operations, the surgeon can operate with guidance from the MRI results. The images are detailed enough to allow placement of electrodes in central brain nuclei in order to treat severe pain or to treat movement disorders in Parkinson's disease.

Improved diagnostics in cancer

MRI examinations are very important in diagnosis, treatment and follow-up of cancer. The images can exactly reveal the limits of a tumour, which contributes to more precise surgery and radiation therapy. Before surgery, it is important to know whether the tumour has infiltrated the surrounding tissue. MRI can more exactly than other methods differentiate between tissues and thereby contribute to improved surgery.
MRI has also improved the possibilities to ascertain the stage of a tumour, and this is important for the choice of treatment. For example, MRI can determine how deep in the tissue a colon cancer has infiltrated and whether regional lymph nodes have been affected.

Reduced suffering for patients

MRI can replace previously used invasive examinations and thereby reduce the suffering for many patients. One example is investigation of the pancreatic and bile ducts with contrast media injection via an endoscope. This can in some cases lead to serious complications. Today, corresponding information can be obtained by MRI.
Diagnostic arthroscopy (examination with an optic instrument inserted into the joint) can be replaced by MRI. In the knee, it is possible to perform detailed MRI studies of the joint cartilage and the cruciate ligaments. Since no invasive instrument is needed in MRI, the risk of infection is eliminated.

Monday, February 27, 2012

Timeline of inventions..Magnetic compass..From Wikipedia


A compass is a navigational instrument that measures directions in a frame of reference that is stationary relative to the surface of the earth. The frame of reference defines the four cardinal directions (or points) – north, south, east, and west. Intermediate directions are also defined. Usually, a diagram called a compass rose, which shows the directions (with their names usually abbreviated to initials), is marked on the compass. When the compass is in use, the rose is aligned with the real directions in the frame of reference, so, for example, the "N" mark on the rose really points to the north. Frequently, in addition to the rose or sometimes instead of it, angle markings in degrees are shown on the compass. North corresponds to zero degrees, and the angles increase clockwise, so east is 90 degrees, south is 180, and west is 270. These numbers allow the compass to show azimuths or bearings, which are commonly stated in this notation.
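As a small illustration of the bearing notation just described (the helper below is hypothetical, not from the article), mapping a bearing in degrees back to the nearest of the four cardinal points takes only a few lines of Python:

# Hypothetical helper: map a bearing in degrees (0 = north, increasing
# clockwise) to the nearest of the four cardinal points.
def cardinal_from_bearing(bearing_deg: float) -> str:
    points = ["N", "E", "S", "W"]  # 0, 90, 180, 270 degrees
    index = round((bearing_deg % 360) / 90) % 4
    return points[index]

print(cardinal_from_bearing(0))    # N
print(cardinal_from_bearing(95))   # E (closest to 90 degrees)
print(cardinal_from_bearing(185))  # S (closest to 180 degrees)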
There are two widely used and radically different types of compass. The magnetic compass contains a magnet that interacts with the earth's magnetic field and aligns itself to point to the magnetic poles. The gyro compass (sometimes spelled with a hyphen, or as one word) contains a rapidly spinning wheel whose rotation interacts dynamically with the rotation of the earth so as to make the wheel precess, losing energy to friction until its axis of rotation is parallel with the earth's.
The magnetic compass was invented during the Chinese Han Dynasty between the 2nd century BC and 1st century AD,[1] and was used for navigation by the 11th century.[2] The compass was introduced to medieval Europe 150 years later,[2] where the dry compass was invented around 1300.[3] This was supplanted in the early 20th century by the liquid-filled magnetic compass.[4]
A simple dry magnetic pocket compass

The Nobel Prize in Physiology or Medicine 2005..discovery of the bacterium Helicobacter pylori and its role in gastritis and peptic ulcer disease


Barry J. Marshall

J. Robin Warren

The Nobel Prize in Physiology or Medicine 2005 was awarded jointly to Barry J. Marshall and J. Robin Warren "for their discovery of the bacterium Helicobacter pylori and its role in gastritis and peptic ulcer disease"

Summary

This year's Nobel Laureates in Physiology or Medicine made the remarkable and unexpected discovery that inflammation in the stomach (gastritis) as well as ulceration of the stomach or duodenum (peptic ulcer disease) is the result of an infection of the stomach caused by the bacterium Helicobacter pylori.
Robin Warren (born 1937), a pathologist from Perth, Australia, observed small curved bacteria colonizing the lower part of the stomach (antrum) in about 50% of patients from whom biopsies had been taken. He made the crucial observation that signs of inflammation were always present in the gastric mucosa close to where the bacteria were seen.
Barry Marshall (born 1951), a young clinical fellow, became interested in Warren's findings and together they initiated a study of biopsies from 100 patients. After several attempts, Marshall succeeded in cultivating a hitherto unknown bacterial species (later denoted Helicobacter pylori) from several of these biopsies. Together they found that the organism was present in almost all patients with gastric inflammation, duodenal ulcer or gastric ulcer. Based on these results, they proposed that Helicobacter pylori is involved in the aetiology of these diseases.
Even though peptic ulcers could be healed by inhibiting gastric acid production, they frequently relapsed, since bacteria and chronic inflammation of the stomach remained. In treatment studies, Marshall and Warren as well as others showed that patients could be cured from their peptic ulcer disease only when the bacteria were eradicated from the stomach. Thanks to the pioneering discovery by Marshall and Warren, peptic ulcer disease is no longer a chronic, frequently disabling condition, but a disease that can be cured by a short regimen of antibiotics and acid secretion inhibitors.

Peptic ulcer – an infectious disease!

This year's Nobel Prize in Physiology or Medicine goes to Barry Marshall and Robin Warren, who with tenacity and a prepared mind challenged prevailing dogmas. By using technologies generally available (fibre endoscopy, silver staining of histological sections and culture techniques for microaerophilic bacteria), they made an irrefutable case that the bacterium Helicobacter pylori is causing disease. By culturing the bacteria they made them amenable to scientific study.
In 1982, when this bacterium was discovered by Marshall and Warren, stress and lifestyle were considered the major causes of peptic ulcer disease. It is now firmly established that Helicobacter pylori causes more than 90% of duodenal ulcers and up to 80% of gastric ulcers. The link between Helicobacter pylori infection and subsequent gastritis and peptic ulcer disease has been established through studies of human volunteers, antibiotic treatment studies and epidemiological studies.

Helicobacter pylori causes life-long infection

Helicobacter pylori is a spiral-shaped Gram-negative bacterium that colonizes the stomach in about 50% of all humans. In countries with high socio-economic standards infection is considerably less common than in developing countries where virtually everyone may be infected.
Infection is typically contracted in early childhood, frequently by transmission from mother to child, and the bacteria may remain in the stomach for the rest of the person's life. This chronic infection is initiated in the lower part of the stomach (antrum). As first reported by Robin Warren, the presence of Helicobacter pylori is always associated with an inflammation of the underlying gastric mucosa as evidenced by an infiltration of inflammatory cells.

The infection is usually asymptomatic but can cause peptic ulcer

The severity of this inflammation and its location in the stomach are of crucial importance for the diseases that can result from Helicobacter pylori infection. In most individuals Helicobacter pylori infection is asymptomatic. However, about 10-15% of infected individuals will at some time experience peptic ulcer disease. Such ulcers are more common in the duodenum than in the stomach itself. Severe complications include bleeding and perforation.
The current view is that the chronic inflammation in the distal part of the stomach caused by Helicobacter pylori infection results in an increased acid production from the non-infected upper corpus region of the stomach. This will predispose for ulcer development in the more vulnerable duodenum.

Malignancies associated with Helicobacter pylori infection

In some individuals Helicobacter pylori also infects the corpus region of the stomach. This results in a more widespread inflammation that predisposes not only to ulcer in the corpus region, but also to stomach cancer. This cancer has decreased in incidence in many countries during the last half-century but still ranks as number two in the world in terms of cancer deaths.
Inflammation in the stomach mucosa is also a risk factor for a special type of lymphatic neoplasm in the stomach, MALT (mucosa associated lymphoid tissue) lymphoma. Since such lymphomas may regress when Helicobacter pylori is eradicated by antibiotics, the bacterium plays an important role in perpetuating this tumour.

Disease or not – interaction between the bacterium and the human host

Helicobacter pylori is present only in humans and has adapted to the stomach environment. Only a minority of infected individuals develop stomach disease. After Marshall's and Warren's discovery, research has been intense. Details underlying the exact pathogenetic mechanisms are continuously being unravelled.
The bacterium itself is extremely variable, and strains differ markedly in many aspects, such as adherence to the gastric mucosa and ability to provoke inflammation. Even in a single infected individual all bacteria are not identical, and during the course of chronic infection bacteria adapt to the changing conditions in the stomach with time.
Likewise, genetic variations among humans may affect their susceptibility to Helicobacter pylori. Not until recently has an animal model been established, the Mongolian gerbil. In this animal, studies of peptic ulcer disease and malignant transformation promise to give more detailed information on disease mechanisms.

Antibiotics cure but can lead to resistance

Helicobacter pylori infection can be diagnosed by antibody tests, by identifying the organism in biopsies taken during endoscopy, or by the non-invasive breath test that identifies bacterial production of an enzyme in the stomach.
An indiscriminate use of antibiotics to eradicate Helicobacter pylori also from healthy carriers would lead to severe problems with bacterial resistance against these important drugs. Therefore, treatment against Helicobacter pylori should be used restrictively in patients without documented gastric or duodenal ulcer disease.

Microbial origin of other chronic inflammatory conditions?

Many diseases in humans such as Crohn's disease, ulcerative colitis, rheumatoid arthritis and atherosclerosis are due to chronic inflammation. The discovery that one of the most common diseases of mankind, peptic ulcer disease, has a microbial cause, has stimulated the search for microbes as possible causes of other chronic inflammatory conditions.
Even though no definite answers are at hand, recent data clearly suggest that a dysfunction in the recognition of microbial products by the human immune system can result in disease development. The discovery of Helicobacter pylori has led to an increased understanding of the connection between chronic infection, inflammation and cancer.
Helicobacter pylori