Comments on the Decline of Logic Education
I recently asked a number of logicians, historians of logic, and logic enthusiasts the following question:
“When exactly and why did logic stop being a core requirement for every single educated person in the west and become seen as a technical, niche elective that only a tiny fraction of educated people know anything about?”
I received 2 responses in the Ontolog Forum:
John Sowa said: "I blame Bertrand Russell. He wanted schools to stop teaching traditional logic and replace it with symbolic logic. He got 50% of what he asked for."
Chris Menzel, a logic professor at Texas A&M, said: "I attribute this far more to the utter havoc wrought upon higher education by conservative, specifically brainless Republican, politicians. They've managed to transmogrify our once glorious system of state universities to a collection of education 'dealerships' whose purpose is to provide a 'service' to their 'clients' that guarantees them a high paying job in business or industry. The on-going gutting of the liberal arts has been a sad consequence of this."
There were numerous responses at Academia.edu:
John Corcoran said: "I have never given this any thought, but it is an interesting question. One thing to bear in mind is that over the years logic got competition from 'critical thinking' and kindred subjects."
The discussion below is from John Corcoran's session titled CORCORAN ON LOGIC TEACHING IN THE 21ST CENTURY at Academia.edu:
OLAP is NOT Becoming Obsolete
OLAP cubes are among the most powerful resources available for business intelligence and analytics. Here's what the product manager for Microsoft Analysis Services said about OLAP back in 2002:
OLAP multidimensional databases combine incredible performance with unsurpassed analytical power and, in my opinion, are the foundation of the BI platform.
The multidimensional data model is vastly superior to the relational data model when it comes to the expressiveness of analytical operations. The ability to have random access to any point in space, both detailed data and aggregates, makes it a breeze to express calculations that would otherwise take pages of SQL statements using a relational database.
This remains true about OLAP today, and it is likely to remain true for a long time. While technologies change, the underlying concepts remain the same. But OLAP seems to be falling out of favor recently. Why would that be?
The top Google search result for OLAP claims "OLAP Is Becoming Obsolete". This is actually a paid advertisement that invites users to download a paper titled Selecting the Right Database Technology for Your Business Analytics Project. Interestingly, the paper says very little about OLAP and does not use the word obsolete anywhere. The only negative thing it says about OLAP is the single sentence below:
Storing the results of these pre-calculations takes exponentially more storage resources than the actual raw data does, limiting the size of raw data that can make up a cube to gigabyte scale.
But that claim is false: the most popular storage format for OLAP cubes is a multidimensional structure that requires far less storage than the original source data. Production OLAP cubes have exceeded 20 terabytes of raw data, and their continued growth is limited only by computing power, memory, and storage.
In the real world, there is no reason to believe that OLAP is becoming obsolete. But there are more signs that people think it is, notwithstanding the facts. The statement below is from the Microsoft website more recently:
For new projects, we generally recommend tabular models. (rather than OLAP cubes)
Microsoft has been a leader in promoting OLAP, so why are they now downplaying it and steering people towards tabular models? The reason they give is that "tabular models are faster to design, test, and deploy..." But that is true only when the requirements and data are very simple. When the requirements or the data become more complex, even a little bit, the complexity of developing a tabular model explodes and quickly becomes far, far more complex than a multidimensional OLAP project with exactly the same requirements and data. And the end products are less flexible and far less able to adapt to changing requirements.
So what gives? Why would Microsoft make a claim that relies on an implausible assumption of simplicity which does not exist in most enterprise environments? And why would another company claim that OLAP is becoming obsolete in a paid advertisement with only a single dubious claim to back it up?
Here's what I think is going on: OLAP is most valuable when the underlying source data is clean and well-integrated. And clean, well-integrated data is difficult to achieve for many organizations. Claiming that OLAP is obsolete is a marketing ploy to promote products that work well with poor-quality data and data that is not well integrated.
There is a legitimate need for such products, and they have a huge market potential – but vendors should make that case and sell those products without claiming that OLAP is becoming obsolete because it is not. For organizations with clean, well-integrated data OLAP is far and away the best choice for business intelligence and analytical applications.
A False Hope for Big Data
Big data can provide powerful insights into large data sets. Some scholars and practitioners have even suggested that big data tools and techniques might replace relational databases for ordinary business use, but this claim offers only false hope for organizations that struggle with relational databases. Here's why:
Big data systems work with information organized into small 2-part chunks known as key-value pairs (or some related format). For example: Last Name = Fuller; City = Redmond; Car = Honda Accord; Order status = complete.
Organizing information this way is great for things like analyzing trends and detecting patterns. But big data formats cannot be used for ordinary business reporting unless each record is tagged with additional information to tell which other records it is related to. For example: this address belongs to that person; this item goes with that order, and so forth. Applying these kinds of tags to information in a big data format requires exactly the same kind of discipline and pre-planning as it would if it were organized for a relational database. Big data offers nothing new in this regard.
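The point above can be made concrete with a small sketch. The data and field names below are invented for illustration; the sketch shows why untagged key-value chunks cannot answer an ordinary reporting question, while tagged chunks require exactly the relationship planning a relational design demands up front.

```python
# Untagged key-value chunks: there is no way to tell which city goes
# with which order, so no report can be built from them.
untagged = [
    {"Last Name": "Fuller"},
    {"City": "Redmond"},
    {"Order status": "complete"},
]

# Tagged version: each chunk carries a key linking it to a related record,
# the same kind of relationship a relational schema declares in advance.
tagged = [
    {"customer_id": 1, "Last Name": "Fuller"},
    {"customer_id": 1, "City": "Redmond"},
    {"order_id": 501, "customer_id": 1, "Order status": "complete"},
]

def city_for_order(order_id, records):
    """Reassemble a simple report fact: the city for a given order."""
    order = next(r for r in records if r.get("order_id") == order_id)
    return next(r["City"] for r in records
                if r.get("customer_id") == order["customer_id"] and "City" in r)

print(city_for_order(501, tagged))  # Redmond
```

Note that writing `city_for_order` required knowing, in advance, that orders relate to customers and customers to cities, which is precisely the discipline and pre-planning the paragraph above describes.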
Even when a big data record set includes complete information about the relationships between each pair, big data technologies do not offer anywhere near the flexibility of relational databases for reporting purposes. So any claim that big data presents a plausible alternative to relational databases for general business use is uninformed and false.
Cost-Benefit Value is Being Ignored
The idea that managing information involves cost-benefit tradeoffs is not new, but unfortunately those decisions usually focus only on engineering and implementation issues such as performance and storage, while business utility is totally ignored. By business utility I mean the capacity of an information resource to meet expressed and unexpressed needs and adapt to changing requirements.
To understand why, let's look back to the early days of modern database software. The statements below were made by Don Chamberlin, one of the co-designers of SQL, the world's most widely-used database query language. Here he is describing decisions his team made in the mid-1970s, intended to give users more flexibility with cost-benefit tradeoffs:
“When the original SQL designers decided to allow users the options of handling nulls and duplicates, they viewed these features as minor conveniences, not as major departures from orthodoxy, taken at the risk of excommunication."
"SQL trusts the database designer to decide whether the costs ... are justified. To impose these costs on all applications ... seems a little heavy-handed, and seemed even more so in 1975 given the costs of storage and processing at that time."
Even today (in 2015) Dr. Chamberlin and his former colleagues continue to bear harsh criticism for those decisions. This is completely unfair because they did not force anyone to do or ignore anything. Rather, they gave users the freedom to make their own cost-benefit decisions.
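The freedom Chamberlin describes is easy to demonstrate. The sketch below (an illustrative schema, not one from the article) uses Python's built-in sqlite3 module to show that SQL leaves the choice about nulls and duplicates to the designer: a table declared without constraints accepts both, and a designer who judges the costs unjustified simply declares constraints instead.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# No PRIMARY KEY or NOT NULL: this designer has chosen to permit
# duplicate rows and missing values.
cur.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [("Fuller", 10.0), ("Fuller", 10.0), (None, 25.0)])

# Duplicates are stored; COUNT(*) counts rows, COUNT(customer) skips NULLs.
rows = cur.execute("SELECT COUNT(*), COUNT(customer) FROM orders").fetchone()
print(rows)  # (3, 2)

# The opposite choice: constraints push the checks into the schema,
# at the cost of rejecting data that violates them.
cur.execute("CREATE TABLE strict_orders ("
            "order_id INTEGER PRIMARY KEY, customer TEXT NOT NULL)")
```

Either table is legal SQL; which one is right is a cost-benefit decision, which is exactly the point.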
But unfortunately those decisions today, and others of equal importance, do not fall to the business stakeholders who pay the price for poor choices and who are in the best position to make good ones. Instead they are made behind the scenes by technical professionals, while information owners remain unaware that such tradeoff opportunities even exist. The result is information systems that are too expensive, adapt poorly to changing requirements, and fail to meet the expectations of their owners.
Information owners do not need to become technical experts to make good decisions about organizing information, and it is absolutely essential that business experts, rather than architects or engineers, have final decision authority. The role of IT departments in these matters should be strictly limited to cost-benefit advisement.
Even when technical professionals carefully consider cost-benefit tradeoffs they are constrained by a technical perspective. The following statement is from a paper titled Justifying database normalization: a cost/benefit model:
"the determination of appropriate normal forms frustrates many systems analysts ... a trade-off exists among system performance, storage, and costs."
But the author omits a critical factor: the primary overall consideration in any cost-benefit equation should always be business utility. Technology professionals cannot be expected to recognize the full range of implications for business utility in any given tradeoff scenario. They must instead rely on a set of requirements specifically spelled-out in advance by the business customer. That is why scope creep is such a huge problem in most IT projects, and why most information systems cannot adapt well to changing requirements.
It would probably be impossible for a business owner to make a complete list of every possible use-case for a given resource. But a business owner can easily determine whether a resource can satisfy their foreseeable needs even before any requirement has been expressed. What an owner considers to be foreseeable can change over time, but it will always change more slowly than the perceived requirements when technology professionals have to guess or fill-in the blanks. New technologies can expand the range of use cases for information, but for information to be useful it must still be organized in a way that allows the desired use – technology cannot change that.
For related discussion see:
The Legacy of the Systems Men
Dr. Thomas Haigh tells a fascinating story in The Business History Review that explains why information management is seen as a technical discipline rather than a management one: The systems men were members of the Systems and Procedures Association during the 1950s and 60s. The purpose of this association was not to promote research or continuing education, but rather to seek increased status and management authority for its members within their employing organizations. They offered an implicit bargain to corporate executives:
"You put us in charge and we’ll deliver to you more power over your firms than you’ve ever dreamed of" (p 43)
Executives for the most part were not convinced. They understood that technical expertise does not translate into management ability. By the 1970s the Systems and Procedures Association was defunct, and the various roles of the systems men merged into corporate IT departments. But they left a stubborn cultural legacy that still persists today: the idea that managing information is a job for architects and engineers rather than business experts:
"For better or worse, to speak of something as an information system continues to imply that it should be engineered by an information specialist and built using information technology" (p 59)
"It seems unlikely that the idea of information can ever truly be separated from these roots: it is just too historically and culturally charged" (p 59)
The full article is available here, and a free version here.
3 Ways to Lower Costs and Improve Outcomes in any BI or Analytics Project
Here are 3 ways to lower costs and improve outcomes in any BI or analytics project, or for that matter any information management effort. They require varying degrees of commitment ranging from easy, free, and doable right now, to a need for significant change in organization and culture:
- Have each information owner (meaning the business person who decides the requirements for an information resource) actually look at the proposed way the information will be organized; in other words, the actual tables and relationships with sample data (very important). Some might think this sounds like asking them to look at programming code – Not so! Programming code is purely technical in nature. Deciding how information is organized is purely business in nature; the only input that should be needed from IT is cost-benefit advisement on issues like performance. The way information is organized determines how it can be used. The operational capabilities of any organization are literally determined by the way its information is organized. A business owner cannot possibly make a complete list of every possible use case for an information resource, but when they look directly at a proposed format and see how it is organized they can easily determine whether that resource will meet their foreseeable needs even before they try to express any requirement. Of course, what an information owner considers foreseeable can change over time, but it will always change more slowly than the constantly changing list that must be maintained when an architect or engineer is in charge of determining requirements from their own perspective. The time a business expert spends doing this will be returned many times over. It will reduce scope creep, save development cycles and produce better outcomes every time, guaranteed.
- Assign an information manager to every business unit. Information managers can be drawn from the same talent pool as business analysts. They are business-oriented professionals often trained at university business schools to manage information and determine how it should be organized. There is no reason these people should work in any IT department except as information managers for business units within IT. Properly-placed information managers will eliminate the need for business analysts and will be 3x to 10x more effective and productive. Information managers should report to the same office as business managers, usually a GM or VP. An information manager should be responsible for deciding how information produced by that business unit will be organized according to the priorities and requirements of the business. When information needs to be organized across multiple systems and business units, information managers should coordinate with their cross-department peers and respond to policies set by senior information managers who report directly to the COO or CFO. Information managers for large business units may require a staff, as will those for some smaller organizations depending on the rate of change and complexity of the information they manage.
- Have everyone in your organization take a course in fundamental logic. This is not too much to ask – logic was once a core focus of classical education. In fact it was one of the main reasons universities were invented in the first place. Logic remained a central pillar of university curricula for hundreds of years until around the 1940s or so, but since then it has been severely de-emphasized, at great detriment to the discipline of information management. Today a person can earn a PhD in nearly any subject, including business administration or computer science, without taking a single introductory course in formal logic. Almost any person at any level of a modern organization can create new information resources, so logic education and logic-aware management are absolutely essential for any organization that wants to build an effective culture and capacity to manage information.
Normalization is a Dubious Concept
Database normalization is a dubious concept for 2 reasons:
- The cost-benefit tradeoffs in choosing a normal form can often be determined only by an information owner, not by a database expert, and
- The normal form of many tables can be determined only by a subject matter expert, rather than a database expert
As an example of reason 1, the table below contains an error which you can see in the last row: an address in Grand Junction, Tennessee has the same zip code as the address above it in Grand Junction, Colorado. This is clearly a mistake, but without an understanding of zip code assignments we do not know whether the error here is with the State or the zip.
Moving City, State and ZIP into a pre-defined zip code table, as shown here, eliminates any potential for this kind of inconsistency; however the advantage might come at a cost because performance can suffer when applications and reports have to query two joined tables instead of one.
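The two designs can be sketched as runnable SQL (column names here are illustrative, since the original tables appear only as images). The first design is one flat table; the second moves City and State into a zip-code lookup table, which prevents the inconsistency but forces reports through a join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("PRAGMA foreign_keys = ON")  # SQLite requires this to enforce FKs

# Design 1: one flat table. Reads are single-table and fast, but nothing
# stops a Grand Junction, TN row from reusing a Colorado zip code.
cur.execute("CREATE TABLE address_flat "
            "(street TEXT, city TEXT, state TEXT, zip TEXT)")

# Design 2: zip is a key into a lookup table, so each zip code can map
# to exactly one city/state pair.
cur.execute("CREATE TABLE zip_codes "
            "(zip TEXT PRIMARY KEY, city TEXT, state TEXT)")
cur.execute("CREATE TABLE address_norm (street TEXT, "
            "zip TEXT REFERENCES zip_codes(zip))")

cur.execute("INSERT INTO zip_codes VALUES ('81501', 'Grand Junction', 'CO')")
cur.execute("INSERT INTO address_norm VALUES ('12 Main St', '81501')")

# Reports now need a join -- the performance cost the owner must weigh.
row = cur.execute("""
    SELECT a.street, z.city, z.state
    FROM address_norm a JOIN zip_codes z ON a.zip = z.zip
""").fetchone()
print(row)  # ('12 Main St', 'Grand Junction', 'CO')
```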
So what is more important, better performance or lower risk of error? Who should decide? Only information owners have the appropriate perspective and incentive to make that kind of decision wisely (see Sophotaxis). Architects and engineers can provide valuable cost-benefit advisement, but final decisions should be made by informed stakeholders.
The example above is pretty simple, but large business databases can have thousands of similar cost-benefit scenarios that are far more complex. The more complex the situation, the greater the need for ownership perspective and expertise in the specific business issues at hand.
As an example of reason 2, consider a simple table containing only a single column for phone numbers. Most database engineers would agree that this table satisfies at least first normal form as-is, although a phone number would typically be incorporated into a larger table. That would be fine and correct in most cases, because most organizations treat a phone number as a single piece of information which is useful for making telephone calls. But for a company such as a telecommunication service provider, where users might want to group or sort phone numbers by area code or exchange prefix, this table would not be in first normal form, or even zero normal form. In other words, it would not satisfy even the minimum theoretical standard for a database table because a column contains more than one kind of information. Instead, in that particular case, the numbers should be broken out into their meaningful components as shown in the lower image; this will allow the information owners to use the information the way they want to use it.
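The decomposition the telecom case requires can be sketched as follows. This assumes North American (NANP) ten-digit numbers; the helper name and dictionary keys are mine, chosen to mirror the components named above.

```python
def split_nanp(number: str) -> dict:
    """Split a 10-digit North American phone number into the components
    a telecom would need as separate columns: area code, exchange, line."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if len(digits) != 10:
        raise ValueError("expected a 10-digit number")
    return {"area_code": digits[:3],   # groupable/sortable by region
            "exchange":  digits[3:6],  # groupable/sortable by prefix
            "line":      digits[6:]}

print(split_nanp("206-555-0143"))
# {'area_code': '206', 'exchange': '555', 'line': '0143'}
```

For most businesses this split would be pointless overhead; for the telecom it is what makes the column atomic with respect to how the information will actually be used.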
This shows that even in very simple cases the normal form of a table can be determined only by someone who understands how the information will be used. Business databases are filled with situations like this, but unfortunately most decisions about organizing information are made by technologists rather than business experts, and communication between the two parties rarely extends to such detail. This is a main reason organizations struggle to manage information effectively, and why most information systems cannot adapt well to changing requirements with reasonable cost and effort.
Information should be organized in a way that makes the most sense to its owner, not according to some generic normal form. The normal forms are usually seen as guidelines to protect consistency or improve performance. But any decision made to improve performance can potentially degrade the usefulness and value of the information, and the set of decisions needed to protect consistency can only be determined by a business expert.
A far more useful and comprehensive tool for thinking about the way information should be organized is the principle of sophotaxis, which I explain in The Ancient Secrets of Information Management.
For related discussion see:
A Key Assumption About Information Systems is False
A widely held assumption in the academic field of business information systems is that technology and behavior are inseparable, but this is false. One widely-cited source that promotes this idea is Design Science in Information Systems Research, where the authors state:
"Technology and behavior are not dichotomous in an information system. They are inseparable (Lee 2000)"
The source they cite (Lee 2000) explains their rationale:
"The problem of 'technology vs. behavior' is a dilemma in the following way: If we take a technology approach to IS, then how would we be different from engineering and computer science? But if we take a behavioral approach to IS, then how would we be doing research that any behavioral field could not already do?"
"Just as a physician cannot design a remedy for a patient’s body and emotions separately, and just as an architect cannot design “form and function” independently, the IS field similarly does not have the option of designing the technology subsystem alone or the behavioral subsystem alone – we have to design both together."
My response to the first paragraph is this: Deciding how business information is organized has nothing to do with engineering and computer science and everything to do with management priorities and objectives. The role of technology professionals should be limited to implementation and cost-benefit advisement. This should be an easy distinction to make, but the conventional wisdom is clouded by historical and cultural biases. For example, technology workers in the 1950s persuaded business leaders to think of information management as an engineering discipline instead of a management prerogative. This is discussed further in The Legacy of the Systems Men. There are plenty of purely 'behavior' related issues that the academic IS community could focus on which have nothing to do with technology, such as the problem of poorly-organized information, which I describe further in Sophotaxis.
My response to the second paragraph is that it is false. Decisions about how information is organized can and should be made before any automated system is created, even when those decisions can change (and they always do). The design of the automated system might raise cost-benefit issues that will impact the information decisions, but those cost-benefit trade-offs cannot be well-understood until decision makers know what the trade-offs will entail, and that is possible only when the information resource is defined first (see Cost-Benefit Value is Being Ignored).
When business-oriented priorities are subordinated to technology-oriented factors without a deliberate cost-benefit analysis, operational and analytical capabilities suffer. Organizing information is an act of business administration, which is a role IT departments are not intended for. The role of IT should be limited to implementation and cost-benefit advisement.
The following statement is evidence of how deeply the issue of information management is misunderstood. It has been repeated in various forms more than ten thousand times by authors at universities, technology companies, and government organizations:
"With the proliferation of information technology starting in the 1970s, the job of information management had taken a new light, and also began to include the field of data maintenance. No longer was information management a simple job that could be performed by almost anyone. An understanding of the technology involved and the theory behind it became necessary. As information storage shifted to electronic means, this became more and more difficult."
This statement is false; information management has never been a simple job that could be performed by almost anyone – at least not since the birth of modern accounting in the late 13th century. The techniques used in manual accounting systems rely on a set of cross-referenced and interconnected books which use structures and rules that are fully consistent with the theory behind modern relational database systems, which I explain further in The Ancient Secrets of Information Management.
Accounting is the discipline of managing information about money. The same logic-based techniques can be used to manage other kinds of information as well, but manual accounting is so painstaking and time consuming that it is easy to understand why early merchants only made the effort with the kind of information they considered to be most important. When relational database software was introduced in the 1970s, business leaders and scholars mistakenly assumed that it had created an entirely new computer-based method to organize information. But in reality it created a new computer-based way to automate old logic-based techniques that had been used with success for 700 years. If this were understood the shift to electronic automation would have made things far easier rather than more difficult.
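The structural parallel claimed above can be illustrated with a minimal sketch. The accounts and amounts are invented; the point is that a double-entry journal is already a set of cross-referenced, rule-governed records, which maps directly onto relational tables and constraints.

```python
# A chart of accounts -- in relational terms, a lookup table.
accounts = {1: "Cash", 2: "Inventory"}

# Journal lines: each references an account by id, the same kind of
# cross-reference a foreign key expresses in a relational database.
journal = [
    {"entry": 1, "account_id": 1, "debit": 0,   "credit": 500},
    {"entry": 1, "account_id": 2, "debit": 500, "credit": 0},
]

def entry_balances(entry_id, lines):
    """The centuries-old consistency rule: within each journal entry,
    total debits must equal total credits."""
    rows = [l for l in lines if l["entry"] == entry_id]
    return sum(l["debit"] for l in rows) == sum(l["credit"] for l in rows)

print(entry_balances(1, journal))  # True
```

A relational database enforces the same kinds of rules declaratively (foreign keys, check constraints) instead of by a clerk's discipline, which is the sense in which the software automated an old technique rather than inventing a new one.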
Early merchants certainly did not look to the craftsmen who made their tools to also define their accounts. But that is exactly what modern organizations do. It makes no difference that the old tools were made from paper, feathers and dye, and the new ones from computers, software, and networks. The old tools served exactly the same purpose as the new with respect to the organization of information. The new tools serve an additional purpose of automating processes and workflows, but that is no reason to believe that the engineers who create the tools should also be responsible to organize information. And there are important reasons to understand why they should not.
Information is not Technology
Managing information is the most difficult and costly operational challenge facing most businesses. At the root of the problem is a failure to recognize the distinction between information resources and technology resources. To their detriment, businesses treat them as the same thing. My evidence for this claim is that no distinction is ever made in the requirements expressed for either, or in the way each is managed. They are delivered and maintained by the same people and no distinction is recognized at any point in the lifecycle processes of either type of resource. Information resources are mistakenly treated as components of automated systems.
As a result, some of the most important management decisions at every level of enterprise organizations are unwittingly delegated to technical specialists instead of business experts. Efforts to address the resulting problems without addressing the root cause only make the problems worse. It is a vicious circle that creates thick layers of artificial complexity in the form of initiatives, roles and processes which lead to additional costs and complexity. The only way to solve the problem is to recognize that information resources are not the same thing as the technology-based tools used to access and maintain them. Businesses must develop a capacity to determine and express requirements for information resources separately from those of automated systems.
Read more at The Ancient Secrets of Information Management. To learn more about organizing information to meet the needs of its owners see Sophotaxis.
Information is the Ultimate Business Resource
A business resource is anything that brings value to a business. Classical economists described business resources in terms of factors of production. Land, labor and capital are the primary factors because they do not become part of any finished product and are not consumed or significantly changed by the production process. Resources such as raw materials and energy are secondary because they are derived from the primary factors. From the classical perspective even things like entrepreneurship, intellectual property and the time value of money are derived from labor and capital, so they too are considered secondary formulations of the primary factors.
So where does information fit in? Information is obviously an important business resource, but is it a primary factor or secondary? Or is it something else?
Information is consumed in the production process, but not in the sense that it is depleted or reduced; in fact new information is created by every act of production and commerce. Further, information is non-fungible: one unit cannot be substituted for another the way a kilowatt of electricity, an ounce of gold, or a computer can. Nor can information be replaced the way a building or an executive can be. And no business resource can be effectively utilized without information.
For these reasons information must be acknowledged as superior to every other business resource. It is more primary than the primary factors. Information is the sine qua non of all commerce – a status not even money can claim. Money is, after all, a form of information.
So why did the classical economists not have anything to say about information as a factor of production? My guess is that information is so essential to every aspect of commerce that until the mid 20th century it was not even recognized as a distinct resource class. In 1963 a professor of management noted “As late as 1946 there were in the combined professional, technical and scientific press of the United States only seven articles on the subject of information" (see here).
Information is like the air we breathe – nothing can happen without it, but it is easy to ignore until you have reason to notice.
An information resource is information organized for some purpose. It can take the form of anything from a memorized telephone number to the Library of Congress or the entire internet. The following table lists various types of information resources, how they are organized, and what they are useful for:
| Information resource | Organized by | Useful for |
| --- | --- | --- |
| File cabinet | Drawers with alpha or numeric sorting | Manual document retrieval |
| Novel | Sentences, paragraphs, chapters | Entertainment, relaxation |
| Library | Subject, author | Finding publications |
| Relational database | Tables, columns, rules, relationships | Flexible storage, retrieval and analysis |
| XML file | Tags, nested hierarchies | Transporting and sharing data |
| Big data | Key-value pairs | High-volume capture and processing |
| Semantic ontology | Triples (subject, predicate, object) | Making information discoverable |
The way information is organized determines how it can be used, so decisions about the organization of information should be carefully considered by the owner and managers of the resource. Unfortunately, owners and managers usually only provide high-level guidance, and the actual decisions about the way information gets organized are instead delegated to an architect or technical specialist. This is a costly mistake with long-term consequences. The outcome is almost always an information resource that cannot be used the way its owners intend without being modified for every newly desired use.