I've been messing around with the SEP data again. Updates are now on GitHub (github.com/adamdedwards). Here's a sample of the updated graphs:

So, here's a bad chart:

The source describes a real and important phenomenon: Americans are broadly and significantly wrong about crime rates. But the chart. The chart is bad.

## Why is the chart bad?

The central issue is that the dual y-axes represent drastically different variables. Putting two lines on the same chart is generally fine when they share a scale, and both scales here do run from 0 to 100. But when only one of those scales is a percentage, the shared range masks how different the two variables being measured really are.

What are those variables? The orange line is the national violent crime rate, on a scale from 0 to 100. A rate out of what? Per 1,000 people? Per 100,000? Per million? A little searching suggests the rate is per 1,000 people, but you wouldn't know that from the chart.

The blue line is the "percentage of survey respondents who said that crime has increased locally over the past year." Here's the thing. People can absolutely be correct that the crime rates have increased *locally* while they drop nationally. In fact, *almost everyone* could say correctly that crime rates have increased locally while they decrease nationally, as long as crime rates are decreasing significantly enough in certain parts of the country. This isn't a perception gap if people are correctly identifying local trends while national trends go in the other direction.

Even if we (incorrectly) interpret the blue line to represent what percentage of respondents say crime is increasing nationally over the past year, there are several years where people are correctly identifying crime rates. From 2005 to 2006 and from 2010 to 2012 *crime does increase nationally.* So survey respondents who say crime rates are increasing over the last year in 2006, 2011, and 2012 are correct! There's a perception gap here, but it's on the part of survey respondents who *deny* that crime rates are increasing.

The large yellow area ominously labeled "PERCEPTION GAP" suggests that there's some permanent (since 9/11, roughly) disconnect between what Americans believe the crime rate is and what it actually is. And this is likely true. But the chart doesn't show that!

There are still plenty of interesting questions to ask about these lines and the gap between reality and our perception of it. Why does the blue line spike and drop when it does? Why does public perception seem to restabilize and basically flatten after 2005? Is the blue line bump in 2009 due to Obama's inauguration (and the racist backlash that followed) or is the blue line just delayed by a few years and the bump is due to the *actual* crime rate bump in 2006?

I'm also in broad agreement that Americans' views about violent crime are way off-base. However, charts like this one give us the *appearance* that something exists while actually representing something else entirely. That's a more worrisome perception gap.

The fall semester is finally here, and with it all of the excitement, terror, and new stationery that accompanies every new academic year.

I wish my students and colleagues the very best this semester.

I've been reading a lot about trivalent logics to help with understanding Peter Vranas' work on a three-valued imperative logic. One thing in particular that I'm puzzled by is how we should understand negation in a trivalent logic. In this post I'm going to write some thoughts on this problem.

Consider that only six of the 27 possible unary truth functions are non-degenerate. By non-degenerate I mean that they do not reduce the number of possible outputs, given an arbitrary input; in other words, each one is a permutation of the three truth values. These are as follows:

INPUT | IDENTITY | 1.1 | 1.2 | 1.3 | 2.1 | 2.2
---|---|---|---|---|---|---
\- | - | - | 0 | 1 | 0 | 1
0 | 0 | 1 | - | 0 | 1 | -
1 | 1 | 0 | 1 | - | - | 0

It's a little simpler to tell what's going on here when you graph the relationship between the truth values. Graphically, the idea is that if each truth function is a relation between 3 truth values, then these six are the only relations where there is only one incoming edge and one outgoing edge for each truth value. In the case of identity and 1.1-3, some of these are the same edge.
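The count of six can be checked mechanically. Here's a short Python sketch (the names are mine) that enumerates all 27 unary truth functions over {0, -, 1} and keeps only those that don't collapse any outputs, i.e. the bijections:

```python
from itertools import product

VALUES = ('0', '-', '1')  # the three truth values

# A unary truth function assigns one output value to each of the three
# inputs, so there are 3^3 = 27 of them in total.
all_unary = [dict(zip(VALUES, outs)) for outs in product(VALUES, repeat=3)]

# Non-degenerate: the function doesn't reduce the number of possible
# outputs, i.e. it is a bijection (a permutation of the truth values).
non_degenerate = [f for f in all_unary if len(set(f.values())) == 3]

print(len(all_unary))       # 27
print(len(non_degenerate))  # 6
```

Since the non-degenerate functions are exactly the permutations of three values, the count is 3! = 6, matching the table above.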

Kleene, Priest, and others interpret 1.1 as negation (depending on how we interpret the truth values FALSE and UNKNOWN), but I'm not sure this is right. For one thing, this is what lets us preserve double negation elimination as a theorem/rule, and with it De Morgan's laws, etc. I think for a genuinely trivalent logic we should interpret either 2.1 or 2.2 as negation.

We can start to see the effects of this interpretation by looking at the binary truth functions. Since there are 3⁹ = 19,683 of them in a trivalent logic (one of three outputs for each of the nine input pairs), we're going to have to rely on symmetries to help make sense of things.

1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---
000000 | 000000 | ------ | 111111 | 111111 | ------
0----0 | 011110 | -1111- | 1----1 | 100001 | -0000-
0-11-0 | 01--10 | -1001- | 1-00-1 | -1001- | 01--10
0-11-0 | 01--10 | -1001- | 1-00-1 | -1001- | 01--10
0----0 | 011110 | -1111- | 1----1 | 100001 | -0000-
000000 | 000000 | ------ | 111111 | 111111 | ------

This is a collection of the characteristic truth tables for 24 binary truth functions in trivalent logic, divided into six groups of four, based on symmetries between them. Starting with the leftmost set of four (labeled '1'), the top-left 3×3 square of truth values is the characteristic truth table for conjunction. Moving counterclockwise in that set of four, we see the characteristic truth tables for conjunction, nonimplication, Sheffer stroke (NAND), and converse nonimplication. In the fourth collection we have Peirce's arrow (NOR), converse implication, disjunction, and implication.

Likewise, the relationship between the unary function 1.1 and the binary functions in the first and fourth collections is as we would expect. Negating one or the other inputs rotates around the collection, and a negation in front of the function moves us to the other collection (and, importantly, back again).
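One piece of this relationship is easy to verify directly. A sketch (the encoding is mine) treating conjunction as min and disjunction as max under the strong Kleene ordering 0 < - < 1, with 1.1 as the negation: applying 1.1 to the output of conjunction lands us on disjunction's group via De Morgan's law, and applying it twice brings us back, since 1.1 is an involution.

```python
from itertools import product

ORDER = {'0': 0, '-': 1, '1': 2}  # strong Kleene ordering: 0 < - < 1

def conj(x, y):  # strong Kleene conjunction: the weaker value wins
    return min(x, y, key=ORDER.get)

def disj(x, y):  # strong Kleene disjunction: the stronger value wins
    return max(x, y, key=ORDER.get)

NEG11 = {'0': '1', '-': '-', '1': '0'}  # the unary function 1.1

for x, y in product('0-1', repeat=2):
    # De Morgan: negating conjunction gives disjunction of the negations,
    assert NEG11[conj(x, y)] == disj(NEG11[x], NEG11[y])
    # and negating twice returns us to where we started (involution).
    assert NEG11[NEG11[conj(x, y)]] == conj(x, y)

print("De Morgan holds for all 9 input pairs")
```

This works because 1.1 reverses the strong Kleene ordering, swapping min and max; the interesting question is what the analogous check looks like when 2.1 or 2.2 plays the role of negation, since neither of those reverses the order.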

What about the other collections of binary truth functions? These are truth functions that we can produce by applying the unary functions in 2.1 and 2.2. These, in some sense, rotate us through the pair of characteristic truth tables in (5 and 6) and (2 and 3), respectively.

OK, so what does all this show us? Honestly, I'm not sure. But I think that these truth tables show that there is a kind of symmetry that would give us two unary truth functions that capture many, and perhaps all, of the properties of negation that we care about. But instead of negation, in a trivalent logic we have left-handed negation and right-handed negation, or something like that.

These "handed" negation functions would have some interesting properties. First of all, they would cancel each other out:

For any proposition P, LRP = RLP = P.

Also, they would obey a triple negation elimination rule:

For any proposition P, LLLP = RRRP = P.
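Both properties can be checked mechanically. A short sketch, reading 2.1 off the unary table above as the left-handed negation L and 2.2 as the right-handed negation R (the "handed" labels are my own):

```python
# The two cyclic unary functions from the table of non-degenerate
# functions: 2.1 sends - to 0, 0 to 1, 1 to -; 2.2 is the inverse cycle.
L = {'-': '0', '0': '1', '1': '-'}  # left-handed negation (2.1)
R = {'-': '1', '0': '-', '1': '0'}  # right-handed negation (2.2)

for p in '0-1':
    # They cancel each other out: LRP = RLP = P.
    assert L[R[p]] == p and R[L[p]] == p
    # Each obeys triple negation elimination: LLLP = RRRP = P.
    assert L[L[L[p]]] == p and R[R[R[p]]] == p

print("LR = RL = identity and LLL = RRR = identity")
```

Both facts fall out of 2.1 and 2.2 being the two 3-cycles on the truth values: they are inverses of each other, and any 3-cycle composed with itself three times is the identity.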

This would imply a modification to De Morgan's laws for operator duality. Duals would no longer be defined in the typical way; instead, each operator would have two intermediate stages (not represented in the truth tables above) that it would have to pass through on the way to its classical dual.

I'm going to have to spend some more time on figuring out what these intermediate steps are, but it seems plausible to me that this is a more thoroughgoing trivalent logic than the traditional interpretation.