For each of these classes of algebraic structures, I'm going to ask the questions: could equational probabilities exhibit a gap, and could they be reverse well-ordered? Note that for most of these that just means equational probabilities in the finite case; the compact case doesn't make sense for most of these. So typically I'll only be talking about the finite case, but I'll make notes about the compact case when it's defined and relevant. For each class of structures, I'm going to produce a judgment of True, Plausible, Plausible?, Plausible??, or False; this may be broken down further (based on well-ordering vs gaps or finite vs infinite) where relevant -- note that if I say something is false for well-ordering, with no further notes, I'm implying that I consider gaps to also be implausible unless I specify otherwise. (Also, varieties with no nontrivial finite (or, when applicable, compact) examples will be labeled "trivially true".) We won't consider the question of whether these sets are closed, because we already know that's false for groups if you only allow finite groups (xᵖ=1 for p an odd prime is a counterexample).
(Also, any time topologies do come up, everything is implicitly Hausdorff; I'm not considering anything that isn't.)
Which classes of structures am I including? That's basically a judgment call. But for simplicity I am going to restrict my attention to finite-signature, finitely-axiomatized varieties. So, no fields (not a variety), no vector spaces over R (not a finite signature), etc. All ring-like algebras will be considered over Z, not over a field! Also, whenever I consider "differential" anything, I'm always considering with a single derivation added, not multiple, for simplicity. Anyway, if you want me to add one, you'll have to give me a reason ("I have a counterexample and it isn't one you're already using" can be a reason, although if it's too obscure that may not be sufficient).
Please send any corrections, updates, or additions to (if you can't see that address, that's because it's been obfuscated with Javascript to fool spammers).
I should stress that it is very possible that many of these classes have easy counterexamples that I've missed! If you know of them, please send them in!
Also, I'm beginning to wonder if I should be thinking more in terms of "polynomials" (as the universal algebraists call them), i.e. allowing arbitrary constants even if they're not in the signature, rather than term functions (only allowing stuff from the signature). Though polynomials raise some questions of how you'd even do them anyway. Maybe do term functions but have some of the variables designated as "whatever you plug in here gets regarded as a constant rather than as a variable"? Hm. Whatever, I'm just doing term functions. Finally, some of these terms have multiple inequivalent definitions... in case of ambiguity, I'm always using the notions defined on Wikipedia (as of 2 January 2022). Anyway, on to the list!
Also, I'm not going to make a separate header for this, but we can also possibly consider groups with one or more automorphisms (or just endomorphisms) attached, possibly satisfying identities. For instance, the results that are known for a²=1 are actually known more generally for σ(a)=a⁻¹, for σ an automorphism satisfying σ²=1. These seem plausible too?
Well, removing things makes things easily false. For a counterexample, let's take a finite set S, designate one element as the identity and another as the sink, and simply define the sum of any two non-identity elements to be the sink. Our equation can then simply be a+b=c+d; as |S| increases, the probability of this being satisfied will approach 1.
This example illustrates an important point -- in order to avoid easy counterexamples of this form, structures have to be sink-resistant! This usually means having some form of inverses, but not necessarily, as we'll see later.
(Another nice counterexample, not relying on the failure of sink-resistance, is a²=a; you can take the set {0,1,...,n} and define ab=max(a,b) except that n·n=n-1.)
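Both of these counterexamples are easy to check by brute force. Here's a quick Python sketch (the function names and the size parameter n are mine; it just re-encodes the two constructions above, computes the relevant probabilities, and verifies associativity along the way):

    from itertools import product
    from fractions import Fraction

    def sink_prob(n):
        # elements 0..n-1; 0 is the identity, 1 is the sink
        def add(a, b):
            if a == 0: return b
            if b == 0: return a
            return 1                 # any two non-identity elements sum to the sink
        R = range(n)
        assert all(add(add(a, b), c) == add(a, add(b, c))
                   for a, b, c in product(R, repeat=3))     # it's even associative
        hits = sum(add(a, b) == add(c, d) for a, b, c, d in product(R, repeat=4))
        return Fraction(hits, n ** 4)

    def max_prob(n):
        # elements 0..n; ab = max(a,b), except n·n = n-1
        mul = lambda a, b: n - 1 if a == b == n else max(a, b)
        R = range(n + 1)
        assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
                   for a, b, c in product(R, repeat=3))     # still associative
        return Fraction(sum(mul(a, a) == a for a in R), n + 1)

    for n in (2, 4, 8, 16):
        print(n, sink_prob(n), max_prob(n))    # both columns tend to 1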
For instance, Moufang loops satisfy Lagrange's Theorem, but for complicated reasons instead of easy ones. I consider this a good sign.
Moreover, every Moufang loop can be derived from a group with triality, which is a group with two automorphisms attached satisfying certain identities, in such a way that if groups with triality obey well-ordering, then so do Moufang loops. And as mentioned above, it seems plausible that groups with automorphisms attached (satisfying specified identities) could obey a well-ordering law? We already know abelian groups with automorphisms attached do so...
I've written more about these here.
(Also, I'm unclear on whether it's possible to define Haar measure for arbitrary compact Moufang loops...)
But... Bruck loops and gyrogroups apparently do still satisfy Lagrange's Theorem. And Bol loops aren't known to, but they also aren't known not to, and it is known that the order of an individual element necessarily divides the order of the loop.
This is getting onto pretty thin ground here, but I'm going to call these "Plausible?".
Our equation will be a²=1. Basically, we use the well-known construction that turns a group G into a twice-as-large Moufang loop M(G,2); if instead of putting in a group G, we put in an inverse-property loop L, then the output will again be an inverse-property loop. If we start with, say, the cyclic group C₄, we can iterate this construction to drive the involution fraction closer and closer to 1. (Indeed, at each step all the new elements added are involutions, so only two elements -- the two elements of order 4 that we started with -- won't be involutions.)
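Here's a sketch of the iteration, under the assumption that the "well-known construction" is the standard Chein doubling M(G,2) with its usual multiplication rules (the encoding is mine); it tracks the fraction of involutions starting from C₄:

    from fractions import Fraction

    def double(elems, mul, inv, e):
        # Chein doubling M(L,2), using the standard M(G,2) multiplication rules:
        #   (g,0)(h,0) = (gh,0)      (g,0)(h,1) = (hg,1)
        #   (g,1)(h,0) = (gh⁻¹,1)    (g,1)(h,1) = (h⁻¹g,0)
        new_elems = [(x, s) for x in elems for s in (0, 1)]
        def new_mul(a, b):
            (g, s), (h, t) = a, b
            if (s, t) == (0, 0): return (mul(g, h), 0)
            if (s, t) == (0, 1): return (mul(h, g), 1)
            if (s, t) == (1, 0): return (mul(g, inv(h)), 1)
            return (mul(inv(h), g), 0)
        def new_inv(a):
            g, s = a
            return (inv(g), 0) if s == 0 else a    # every new element is an involution
        return new_elems, new_mul, new_inv, (e, 0)

    # start with the cyclic group C₄
    elems, mul, inv, e = list(range(4)), lambda a, b: (a + b) % 4, lambda a: -a % 4, 0

    for step in range(6):
        frac = Fraction(sum(mul(x, x) == e for x in elems), len(elems))
        print(len(elems), frac)    # 1/2, 3/4, 7/8, ...: all but two elements are involutions
        elems, mul, inv, e = double(elems, mul, inv, e)

Each doubling adds a full layer of involutions, so at size 2N the fraction works out to 1 - 1/N, marching up toward 1 exactly as described.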
Note that this is a class of structures where we got a definitive false answer even though it's sink-resistant. Sink-resistance is necessary, but by no means sufficient.
(We'll see another counterexample for inverse-property loops below.)
(By the way, inverse-property loops also don't satisfy Lagrange's theorem. Here's a nice example: Take the quaternions over F₃, then restrict to {0,±i,±j,±k}, then define a★b to be a+b if a and b are parallel and ab otherwise. This is an inverse-property loop with 7 elements, but it has subloops of order 3.)
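Here's a brute-force check of that loop (my encoding; I'm reading "parallel" as "on the same line through 0", so 0 counts as parallel to everything, and the coefficients live in F₃):

    from itertools import product

    elems = ['0', 'i', '-i', 'j', '-j', 'k', '-k']

    def axis(a): return a.lstrip('-')
    def sign(a): return -1 if a.startswith('-') else 1
    def make(ax, s): return ax if s == 1 else '-' + ax

    # quaternion products of distinct axes: ij = k, ji = -k, etc.
    QUAT = {('i','j'): ('k', 1), ('j','i'): ('k', -1),
            ('j','k'): ('i', 1), ('k','j'): ('i', -1),
            ('k','i'): ('j', 1), ('i','k'): ('j', -1)}

    def star(a, b):
        if a == '0': return b
        if b == '0': return a
        if axis(a) == axis(b):                   # parallel: add, coefficients in F3
            s = (sign(a) + sign(b)) % 3          # in F3, 2 is the same as -1
            return '0' if s == 0 else make(axis(a), 1 if s == 1 else -1)
        ax, s = QUAT[(axis(a), axis(b))]
        return make(ax, sign(a) * sign(b) * s)

    inv = {a: ('0' if a == '0' else make(axis(a), -sign(a))) for a in elems}

    assert all(star('0', a) == a == star(a, '0') for a in elems)   # '0' is the identity
    assert all(sorted(star(a, b) for b in elems) == sorted(elems) for a in elems)
    assert all(sorted(star(a, b) for a in elems) == sorted(elems) for b in elems)
    assert all(star(inv[a], star(a, b)) == b == star(star(b, a), inv[a])
               for a, b in product(elems, elems))                  # inverse property
    sub = {'0', 'i', '-i'}
    assert all(star(a, b) in sub for a, b in product(sub, sub))    # subloop of order 3
    print("inverse-property loop of order 7 with a subloop of order 3")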
(Note that the associated sloop is associative, i.e., satisfies this equation for all inputs.)
Notice how this example is the reverse of the previous one; in the previous example, we used the fact that the 2-element sloop is associative, to conclude that any power of it is associative, and that therefore the associated squag is nearly associative. In this example, we used the fact that the 3-element squag is self-distributive, to conclude that any power of it is self-distributive, and that therefore the associated sloop is nearly self-distributive.
(Yes, any sloop is also an inverse-property loop, so this also serves as a counterexample for those, but I'm going to leave up the existing counterexample for those because I like it. Note by the way how this again violates Lagrange, since these loops have order 3ⁿ+1.)
Hell, we can even consider the compact case and define Haar measure here -- the different choices of abelian group structure will all yield the same Haar measure, which will also be invariant under both of our endomorphisms. So this is true (including for the compact case).
This example may seem a little unexpected; quandles archetypically come from groups, they have a fair bit of bijectivity, plus they have significant additional structure (the self-distributive law). But the thing to note is that the bijectivity is only on one side, and this counterexample exploits that. It would be nice to have a counterexample that exploits it more explicitly, like the sink-based examples used elsewhere, but this one still demonstrates the principle.
...on the other hand, that isn't really a sufficient explanation. After all, this non-injective operation is constructed out of group multiplication, and we know that with this particular equation if we considered only whole groups, we would get well-ordering. So something else is going on here; it really matters that this set is only closed under conjugation, not group multiplication. Why would that be? I have no idea.
That said, if you want a more purpose-built example that runs more explicitly on this principle, here's one: Take a finite set, designate one element as the swapper, and two others as the swap-pair. Define a⊲b=b⊳a=b unless a is the swapper and b is in the swap-pair, in which case you get the other element of the swap-pair. This is also an involutory quandle and is a counterexample with the same equation.
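Here's a brute-force verification that this really is an involutory quandle (encoding mine, modeling just the one operation a⊲b; the equation from the previous counterexample isn't repeated here, so as an illustration I compute the probability of a⊲b=b, which also tends to 1):

    from itertools import product
    from fractions import Fraction

    def make_op(n):
        # elements 0..n; 0 is the swapper, {1,2} is the swap-pair
        def op(a, b):                  # a ⊲ b: a acting on b
            if a == 0 and b in (1, 2):
                return 3 - b           # the other element of the swap-pair
            return b
        return op

    n = 6
    R = range(n + 1)
    op = make_op(n)
    assert all(op(a, a) == a for a in R)                           # idempotent
    assert all(len({op(a, b) for b in R}) == n + 1 for a in R)     # translations bijective
    assert all(op(a, op(b, c)) == op(op(a, b), op(a, c))           # self-distributive
               for a, b, c in product(R, repeat=3))
    assert all(op(a, op(a, b)) == b for a, b in product(R, repeat=2))  # involutory
    # e.g. the probability of a ⊲ b = b is 1 - 2/(n+1)², tending to 1:
    print(Fraction(sum(op(a, b) == b for a, b in product(R, repeat=2)), (n + 1) ** 2))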
Note of course that it will necessarily be true for any equation that doesn't use the multiplication, because then you just have an abelian group with a constant. But those are trivial cases.
Some interesting nontrivial cases where you can easily prove the existence of a gap are ab=ba (for rings/rngs, including in the compact case; the ⅝ law recurs here), and a²=a (for commutative rings/rngs, not including the compact case). Indeed, for the latter, you can even prove well-ordering, because it's not too hard to determine all the possible values (they're the numbers of the form 2ᵏ/n, where n is odd and k is an integer -- possibly negative -- with k≤Ω(n), where Ω(n) is the number of prime factors of n, counting multiplicity).
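As a quick illustration of these values, here's the idempotent density in Z/m for a few m (a sketch; note the m=4 entry, whose reduced form 1/2 = 2⁻¹/1 is why k has to be allowed to go negative):

    from fractions import Fraction

    def idem_prob(m):
        # fraction of a in Z/m with a² = a
        return Fraction(sum(a * a % m == a for a in range(m)), m)

    for m in (3, 4, 9, 15, 30, 105):
        print(m, idem_prob(m))
    # 3 → 2/3, 4 → 1/2, 9 → 2/9, 15 → 4/15, 30 → 4/15, 105 → 8/105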
There are also many uninteresting and yet not-quite-trivial cases where you can prove a gap for what I would consider to be stupid reasons; think ab=1 in commutative rings.
Commutative rings really do feel a bit like fields, with their whole thing where a polynomial (in any number of variables) is either zero everywhere or else zero on only a tiny fraction of the domain; yes, obviously, general commutative rings are a long way from fields, but it still seems like there is enough of something like that going on here to make this plausible.
My example for idempotent semirings (and commutative semirings) actually isn't sink-based; likely it's possible to come up with a sink-based one, but I haven't bothered. Let our base set be {0, 1, ..., n}, and define a+b=max(a,b), and define ab=max(a,b) if a and b are both nonzero, and ab=0 otherwise (as it must be, since 0 has to be multiplicatively absorbing). Then we can use the equation a+b=ab.
Note also that this example is commutative; and also that if we define a*=max(a,1), it becomes a Kleene algebra, but Kleene algebras don't form a variety, so they don't get an entry here!
As for Conway semirings (and star algebras), here we'll use a sink approach; take {0, ..., n} with saturating arithmetic, and define a*=n if a is nonzero and 0*=1. Then the equation a*=b* does the trick.
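Here are quick sketches of both semiring counterexamples (my encodings; for the first one I'm using the reading above, where ab=0 whenever either factor is 0, so that 0 is absorbing):

    from itertools import product
    from fractions import Fraction

    def idem_semiring_prob(n):
        # {0,...,n}: a+b = max(a,b); ab = max(a,b) if a and b are both nonzero, else 0
        add = max
        mul = lambda a, b: max(a, b) if a and b else 0
        R = range(n + 1)
        # sanity check: distributivity a(b+c) = ab + ac
        assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
                   for a, b, c in product(R, repeat=3))
        return Fraction(sum(add(a, b) == mul(a, b) for a, b in product(R, repeat=2)),
                        (n + 1) ** 2)

    def conway_prob(n):
        # {0,...,n} with saturating arithmetic; a* = n for a ≠ 0, 0* = 1
        star = lambda a: n if a else 1
        R = range(n + 1)
        return Fraction(sum(star(a) == star(b) for a, b in product(R, repeat=2)),
                        (n + 1) ** 2)

    for n in (3, 10, 30):
        print(n, idem_semiring_prob(n), conway_prob(n))   # both tend to 1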
Well, uh... I still have no counterexamples. Maybe having an abelian group with a bilinear operation is enough? I'm going to call this "Plausible?". Note that the ⅝ commuting law does still apply to these; it doesn't require associativity.
Take any abelian group for the additive group. Pick one of the nonzero elements of the group to be 1; then define ab=0 if b≠1, and a1=a. We can then use the equation ab=0 as our counterexample and let the underlying additive group get arbitrarily large.
(Another similar example is the equation ab=a, where you use the same construction as above, except you don't pick a 1, and instead define ab=a if b≠0 and a0=0.)
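A sketch of the first construction, taking Z/m as the additive group (encoding mine; the assert checks the one-sided distributive law (a+c)b = ab+cb -- the other distributive law genuinely fails here, which is fine for a near-ring):

    from itertools import product
    from fractions import Fraction

    def prob_ab_zero(m):
        # additive group Z/m; ab = 0 unless b = 1, and a·1 = a
        mul = lambda a, b: a if b == 1 else 0
        R = range(m)
        assert all(mul((a + c) % m, b) == (mul(a, b) + mul(c, b)) % m
                   for a, b, c in product(R, repeat=3))
        hits = sum(mul(a, b) == 0 for a, b in product(R, repeat=2))
        return Fraction(hits, m * m)

    for m in (2, 5, 20, 50):
        print(m, prob_ab_zero(m))   # (m² - m + 1)/m², tending to 1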
Still, we can take the additive group to be an elementary abelian group of order 2ᵏ and use the first construction from the previous case, then adjoin a new identity 1 that satisfies 1+1=0; we then again use the equation ab=0. As k increases, our probability here doesn't tend to 1, but does tend to ¼ from below, so reverse well-ordering is false.
(We could possibly generalize this to "group with one or more 'multilinear' operations" attached...?)
If I'm not mistaken, the infinite case is similar, only you'll get a probability of 0 for equations that have a probability of ½ over {0, 1}.
Also, since this example is modular, it is automatically also orthomodular.
Of course, an orthocomplemented distributive lattice would be a Boolean algebra, and we know those are true. But we can try distributivity with some weaker complementation requirements...
We can then use the equation a∧¬a≤b∨¬b (which I've written as an inequality but obviously can be written as an equation) to get our counterexample; it will hold unless {a,b}={0,0'}.
These, in addition to the near-ring cases above, are a good example of how adding an additional operation to a structure, allowing more expressive equations, can change it from plausible to false, or true to false; we'll see more of this shortly.
Well, I don't know how to disprove a gap, but we can modify our previous example to disprove well-ordering! Just define ¬0=0' and ¬0'=0; then we have a Kleene algebra, and the probability of a≤¬a (which again can also be written as an equation) is just under ½ (since it holds precisely for the negative numbers), approaching it from below, showing that these are false for well-ordering.
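If I've reconstructed the earlier example correctly -- a chain -n < … < -1 < {0,0'} < 1 < … < n whose two middle elements are incomparable, with ¬x=-x elsewhere -- then the computation looks like this (a sketch; the encoding and the reconstruction are both mine, so treat with suspicion):

    from fractions import Fraction

    def prob_leq_neg(n):
        # elements: integers -n..-1 and 1..n, plus two incomparable "zeros"
        elems = ([('int', k) for k in range(-n, 0)] + [('zero', 0), ('zero', 1)]
                 + [('int', k) for k in range(1, n + 1)])
        def leq(a, b):
            ra = a[1] if a[0] == 'int' else 0      # both zeros sit at level 0
            rb = b[1] if b[0] == 'int' else 0
            if ra == 0 and rb == 0:
                return a == b                      # the two zeros are incomparable
            return ra <= rb
        def neg(a):                                # ¬x = -x, plus ¬0 = 0', ¬0' = 0
            return ('int', -a[1]) if a[0] == 'int' else ('zero', 1 - a[1])
        hits = sum(leq(a, neg(a)) for a in elems)
        return Fraction(hits, len(elems))

    for n in (1, 5, 50, 500):
        print(n, prob_leq_neg(n))   # n/(2n+2): just under 1/2, from below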
So, it looks like for complemented lattices, you may need the full power of a Boolean algebra to get well-ordering.
Yeah, unlike complementation, pseudocomplementation lacks any sort of sink resistance, so we can just use ¬a=¬b as our counterexample.
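For instance (a minimal example of my own, not necessarily the intended one): on the chain 0 < 1 < … < n, the pseudocomplement is ¬0=n and ¬a=0 for all a>0, so ¬a=¬b almost always holds:

    from itertools import product
    from fractions import Fraction

    def prob_negs_equal(n):
        # chain 0 < 1 < ... < n: pseudocomplement ¬0 = n, ¬a = 0 for a > 0
        neg = lambda a: n if a == 0 else 0
        R = range(n + 1)
        hits = sum(neg(a) == neg(b) for a, b in product(R, repeat=2))
        return Fraction(hits, (n + 1) ** 2)

    for n in (2, 10, 100):
        print(n, prob_negs_equal(n))   # (n² + 1)/(n + 1)², tending to 1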
For our counterexample, we can take the Boolean algebra of subsets of a finite set S, and define A•B=A∪B if A∩B=∅, and A•B=S otherwise. Then of course we correspondingly take I=∅, and A/B is the set difference A-B (I'm avoiding the backslash notation for obvious reasons here). We can then use the equation a•b=1 (where 1 denotes the top element, i.e. S), since the probability of two random subsets being disjoint goes to 0 as the size of S increases.
(Note also this counterexample is commutative.)
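Here's a numeric check, encoding subsets of an n-element S as bitmasks (a sketch, encoding mine):

    from itertools import product
    from fractions import Fraction

    def prob_top(n):
        full = (1 << n) - 1                        # S, as a bitmask
        dot = lambda a, b: a | b if a & b == 0 else full
        masks = range(1 << n)
        hits = sum(dot(a, b) == full for a, b in product(masks, repeat=2))
        return Fraction(hits, 4 ** n)

    for n in (1, 2, 4, 8):
        print(n, prob_top(n))   # equals 1 - (3/4)ⁿ + (1/2)ⁿ, tending to 1 from below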
Let's start with the counterexample to well-ordering. Our equation will be x⊕x⊕x=1, and our MV-algebra will be MV₃ₙ₋₁ (that is to say, the set {0, 1/(3n-2), 2/(3n-2), ..., 1} with saturating addition). Then the probability of this equation being satisfied is just below ⅔ and approaches it from below.
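A quick check of this (a sketch; I'm working with the numerators j of the elements j/(3n-2)):

    from fractions import Fraction

    def prob(n):
        d = 3 * n - 2                  # elements are j/d for j = 0..d, so 3n-1 of them
        hits = sum(min(3 * j, d) == d for j in range(d + 1))   # x⊕x⊕x = min(3x, 1)
        return Fraction(hits, d + 1)

    for n in (2, 5, 50, 500):
        print(n, prob(n))   # (2n-1)/(3n-1), approaching 2/3 from below; 2/3 is
                            # exactly the probability that 3x ≥ 1 for x uniform in [0,1]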
So if well-ordering doesn't work, why do gaps? Well, finite MV-algebras have a quite restricted form. Consider that all MV-algebras come from a lattice-ordered abelian group (together with a designated element). For a finite MV-algebra, we can say even more; the lattice-ordered abelian group will have to be discrete (meaning, satisfying the descending chain condition), which in this case forces it to be of the form Zᵐ. The result is that a finite MV-algebra must be a finite product of MV-algebras of the form MVₙ; so it would suffice to prove a gap law just for MVₙ.
To prove this law, observe that as n→∞, the probability over MVₙ has a limit, namely, the probability over [0,1]. If this limit is smaller than 1, we obtain a gap (or else there would be another limit point). If the limit is equal to 1, then the equation holds almost surely over [0,1]; this means it holds on a dense subset of [0,1]ᵐ, and since the set where it holds must be closed, that means it holds on all of it, i.e., the equation is simply true over [0,1]. But then it is also true over each MVₙ, as they embed in it.
I'm a little unclear if it's possible to define the infinite case, as mentioned above; if it is possible, though, I don't see how you'd prove a gap in such a case. But who knows?
I view this as something of a red herring -- yeah, it's true, but I don't see that telling us anything useful about what we really wanted to get at here.
The restricted form these take means that any probability we get will be of the form 1/n, so that gives us well-ordering, but... I can't see this as really being relevant to anything we care about.