1. The LaTeX philosophy

LaTeX is a document preparation system for high-quality typesetting. It is not a word processor; it is not a WYSIWYG system. The LaTeX philosophy is defined by ‘What You Get Is What You Mean’ (WYGWYM). This encapsulates three concepts:

1. Presentation (what you get): the user’s content rendered as an exact reproduction of her mental map of how the document should look;
2. Style (what you mean): the user’s mental map of how to present her content in the most meaningful and eye-catching manner;
3. A compiler that converts the mental map into actionable machine instructions, producing consistent results across platforms.

LaTeX has developed over time with conceptual foundations in the separation of content and style (or presentation). It therefore belongs to the family of markup languages. A LaTeX document (source file) is accordingly divided into two parts. The top part, the preamble, contains all the formatting instructions, hard-coded and applicable to the whole document. The bottom part contains the content. Each element/macro definition in the preamble is a specific instruction to the compiler on how to build the content. This is where the magic happens. LaTeX is both a language framework for making markup notes on the text, and a software engine that interprets the macro instructions.
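The two-part structure described above can be sketched as a minimal LaTeX source file. The class and package names below are the standard ones; the title and text are placeholders chosen for illustration:

```latex
% --- Preamble: style instructions for the whole document ---
\documentclass[12pt]{article}   % overall layout and base font size
\usepackage{amsmath}            % extra mathematics macros
\title{A Minimal Example}
\author{A. N. Author}

% --- Content: plain text with light markup ---
\begin{document}
\maketitle
\section{Introduction}
Changing the preamble restyles this text without touching it.
\end{document}
```

Everything before \begin{document} tells the compiler how to build the document; everything inside the document environment is the content itself.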

What is a Markup language?
A markup language is simply a set of annotations which instruct the compiler how to deal with the document or parts of it. Markup annotations are designed to make the underlying text content readable, meaningful and visually pleasing to the reader. Wikipedia defines a markup language as “a set of rules governing what markup information may be included in a document and how it is combined with the content of the document in a way to facilitate use by humans and computer programs.” The idea here is to have a clear separation between the content of the matter and the style of presentation.
Joseph Wright, a member of the LaTeX3 project, has an excellent recap of the history of the TeX thought process.

Contemporary audiences may reflect on the HTML+CSS bundle. Most websites are now built using HTML for structural markup, and CSS for presentation of that markup. For illustration, HTML uses tags such as <title>, <h1>, <p> and <br> around segments of text to indicate that a block is a title, a top-level heading, a paragraph or a line break, respectively. These tags stay immutable within the document. Design changes, however, can be implemented by changing parameters in the associated CSS document; you don’t have to touch the main content again. This makes it easy to apply different styles over the same content.

To be sure, LaTeX has a steep learning curve. It is often cumbersome. Unless the size of the document--both in terms of pages of text content and graphical content--justifies the effort, most users will argue on behalf of WYSIWYG word processors. The balance tilts in favour of LaTeX in the case of large and complex documents, where it is imperative to maintain one style sheet across chapters and multiple pages. Once the style intent is hard-coded in the preamble, the rest of the text stream is just entered as plain text.
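The one-style-sheet advantage can be sketched in LaTeX itself. Swapping a few preamble lines restyles every page of the document, much as editing a CSS file restyles a whole website; the geometry and mathpazo packages are standard, but the specific styling choices below are illustrative:

```latex
% Preamble edits alone change the look of the entire document.
\documentclass[11pt,twocolumn]{article}
\usepackage[margin=2.5cm]{geometry} % page dimensions
\usepackage{mathpazo}               % Palatino text and math fonts

\begin{document}
% The content below is untouched by any of the changes above.
\section{Results}
The same plain text now appears in two columns, with new margins
and a different typeface.
\end{document}
```

Reverting to the defaults is equally cheap: delete the two \usepackage lines and the class options, recompile, and the content reflows under the original style.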

A bit of backstory here. The TeX Users Group (TUG) documents that in the late 1970s, Donald E. Knuth, a mathematician and computer scientist at Stanford University, was revising the second volume of his multivolume tome The Art of Computer Programming. A series of unfortunate realizations later, Knuth decided to produce his own books.
“As a mathematician/computer scientist, he developed an input language that makes sense to other scientists, and for math expressions, is quite similar to how one mathematician would recite a string of notation to another on the telephone."

Source: TeX Users Group, Just what is TeX? (ibid)

In 1978, Knuth released TeX, with two main objectives in mind: 1. to allow anybody to produce high-quality books with minimal effort, and 2. to provide a system that would give exactly the same results on all computers. The short point behind this digression is that Knuth did not think much about the separation between content and presentation at the time of writing his tome. It was not until the mid-eighties that Leslie Lamport put the idea on a more conceptual foundation with the next generation, LaTeX. For a compact history of the evolution of the concept, please see Joseph Wright’s answer.

Leslie Lamport created LaTeX in 1983 when he needed to write TeX macros for his own use. In the LaTeX User’s Manual, published in 1986, Lamport argued for a clear separation between content and style. The World Wide Web Consortium’s Technical Architecture Group (W3C TAG), tasked with documenting and building consensus around principles of web architecture (in the context of HTML development), reported a draft finding in 2003:

“Separating the concepts content, presentation, and interaction allows more easily composable specifications. For example, a markup language can be specified independently of a style sheet language. The separation facilitates alternate presentations of the same content, which is seen to have an accessibility advantage and to be more suited to the multiple modalities of Web access."

Source: Separation of semantic and presentational markup, to the extent possible, is architecturally sound

Draft TAG finding, 30 June 2003