Thursday, July 23, 2020

Why does everyone Care who's Jewish?

I actually wrote this over 2 years ago, but just found it in my drafts. I think I might have tweeted about it though. When formatting this as bad HTML, I noticed some mistakes in the ratios, so I fixed them. I assume the data is still good. Anyone can try to verify if they're interested.

Anyway...:

I've suspected for a while, based on probably flawed confirmation bias, that Wikipedia might reflect a bias of people wanting to know whether other people are Jewish.
My brain made the connection that the "early life" heading was really the "is this person Jewish?" heading.

So, I've done some Google-fu to try to get a reasonable answer, the lazy way.

Search terms vs results on Wikipedia:

Search terms     "Jewish"    -"Jewish"
"Early life"     1.08m       14.2m
-"Early life"    61.6m       866m

For Wikipedia pages mentioning the term "Jewish", that's a 57:1 ratio of pages with no Early Life section to pages with one, and 61:1 for pages not mentioning "Jewish". That difference is probably not statistically significant. Let's assume it's not.

Ok. That's not too bad. Oh wait... That also includes lots of pages that aren't even about people.
Now let's filter out some noise by adding the search term "living people" which is a category for... living people.
Search terms     "Jewish" "living people"    -"Jewish" "living people"
"Early life"     53.3k                       141k
-"Early life"    217k                        12.3m

For Wikipedia's "living people" pages mentioning the term "Jewish", that's a ratio of about 4:1 with no Early Life section to those with one, and 87:1 for pages not mentioning "Jewish".

So, what's up with that? People really seem to want to know if people are Jewish.

Friday, July 3, 2020

Prank Defeated - Formerly: Introducing Inference: A Neural Network Powered Programming Language

OK, a few months ago, I wrote this post as a bad joke, and scheduled it to be published on April 1st, 2021. Since then, I've had a chance to play around with OpenAI's API playground, and while I've only produced some fairly funny interactions and other basic uses, other people are doing some crazily impressive stuff with it, like this:



So I'm posting this thing early, because why not, but instead of it being a prank, it's more a prediction of the future.


Inference has been created to define a new paradigm of programming languages.

Where all previous languages required programming in a rigid syntax, with precise semantics and no leeway, Inference uses the power of neural networks to infer the programmer's meaning.

The basic model works like this:

Programmer writes code however they want

For example:
  1. var x = position + 8
  2. add a margin of 8 to the position - keep track of that for me (call it x)
  3. let a new variable (let's say x) hold that variable I just used, plus the fixed margin

Training

The neural network ingests the code that's been written, and based on its assessment of certainty of each statement (taken individually, and in a sampling of broadening contexts), quizzes the programmer for meaning, generating educated guesses about the meaning, as well as allowing the programmer to correct it.

For example:
Regarding let a new variable (let's say x) hold that variable I just used, plus the fixed margin, did you mean:
a) int x = position + fixedMargin  [92% certainty]
b) int x = position + [?? supply reference to "fixed margin"] [98% certainty if reference supplied]
c) int x = position + top [86% certainty as fixedMargin used after top]
d) [Let me know more precisely what you meant]
As this is a trained system, the more consistently a style is written, the easier it is to train. That might make it seem pointless, except that it can store a separate profile per developer, meaning multiple programmers can each use their own style and train the model to understand what they individually mean.

Performance

So far, this is in alpha, but with just 3 days of training on pseudocode written by 3 developers, it has been able to understand, compile and execute a simple Pong-style game, written in plain language by those same developers.

More info

Code and infrastructure config will be released soon.
Tweet at me for more info.

Monday, December 16, 2019

Atheism vs Agnosticism: A Brief Explainer

I had a brief Twitter exchange with a well-known poker player, and realised that he seems to be religious. My default position in any exchange is to assume people are atheists. I guess that's because I am one myself, but it also seems to me to be the overall default: everyone starts life as an atheist, with no beliefs about god, or television, or sports, or anything, and it's only through exposure to religious ideas that people become religious. It's why "Christian" countries tend to have more Christians, and "Muslim" countries tend to have more Muslims.

So, I knocked up the following image to explain how Gnosticism (knowledge) differs from Theism (belief) in god(s). I've seen similar things in the past, and the examples are extremely approximate, but it's probably good enough to be able to send people to this link when they ask about my thoughts on it.

Two dimensional spectrum of Theism vs Gnosticism

The only other thing to really talk about here is that, unlike, say, the common 2-d political compass (which I find too naive), belief and knowledge could actually be drawn on one dimension, because all knowledge is really a subset of belief, and it comes down to what presuppositions a person has. But my presuppositions include solipsism being pointless, and the scientific method being an appropriate way to understand whatever can be understood, so I'm ok with treating knowledge as categorically different from belief, even when some purported knowledge turns out to be false, because it's more reliable than a world in which everyone has their own facts.

Friday, June 22, 2018

A Gotcha in Variable Initialisation in Golang

I'm new to Golang development, and there are lots of things I consider weird with the language. I've just discovered this gotcha, luckily before any code went live, so I figured I'd post about it.

Variables of the same name can be declared at multiple scopes within the same function, and since functions can return tuples, it's possible, when declaring and initialising variables at the same time, to intend to reuse a previously declared variable but actually declare a new one that shadows it.

For example:

  x := 1
  y := true
  if y {
    z, x := DoSomethingThatChangesX(x) // := declares a NEW x here, shadowing the outer x
    fmt.Printf("Output: %d, %d", x, z)
  }

  fmt.Printf("If x is 1, oops: %d", x) // x == 1


The output of this will show that the x that is set as an output of DoSomethingThatChangesX will not be the same x as in the outer scope.

This next version shows how the code can be changed, in quite an ugly way, to avoid the gotcha:

  x := 1
  y := true
  if y {
    var z int
    z, x = DoSomethingThatChangesX(x) // plain = assigns to the existing (outer) x
    fmt.Printf("Output: %d, %d", x, z)
  }

  fmt.Printf("If x is 1, oops: %d", x) // x != 1


By declaring z separately before assigning it, both z and the outer x are assigned values separately from their declarations, so no new x is created and no shadowing occurs.

I know this will come across as pretty simplistic, and is obvious when you think about it, but I still think it would be really easy to introduce some subtle bugs, which even careful eyes might not spot, because of it.

Monday, October 16, 2017

A Challenge for Homeopaths

Homeopathy is a body of "knowledge" wherein it is believed that ingredients that cause a symptom when consumed in full strength can be used to treat health issues that have those same symptoms.

At least some homeopaths cause patients to delay real treatment for diseases like cancer, and thereby to die.

The discipline (in the sense of its adherents being disciples) was created, entirely out of whole cloth, at a time when placebos could conceivably be better than the real medical treatments of the day. That's one proposed reason for its rise in popularity.

Not coincidentally, homeopathic remedies require extreme dilution, to the point where simple arithmetic demonstrates that the remedy contains none of the original active ingredient.

Homeopaths claim that a higher amount of dilution, even well past the point where there is nothing left but water, boosts a remedy's strength and effectiveness.

I'm sure this is an overly simplistic summary of homeopathy. Just as there are entire universities where people study for many years in theology, the more fictional a structured field of study, the more esoteric the knowledge must become. Much of the body of knowledge and study must be focused on learning and developing "outs", i.e. the ability to shift goalposts in order to keep the main hypothesis unfalsifiable.

So, based on this take of homeopathy, here is my challenge:

You need not use a remedy to successfully treat a disorder, because that would allow for the aforementioned goalpost shifting ("it doesn't work if you don't believe").

I will merely supply several samples of homeopathic remedies. They will all be samples from real purveyors of homeopathy. For each sample, you need only tell me what the active ingredients are, and at what concentrations.

You can use whatever equipment you would like in order to do the analysis. You can use the samples on people - even true believers - to measure the effect; anything you like as long as the challenge is conducted with your ignorance of the source samples.

In the spirit of James Randi's million dollar challenge, and in light of my inferior financial position (given the need for escrow), I'd suggest a $1000 prize, or perhaps I can crowdfund the prize money, so you can win from lots of non-believers.

Either way, it will be worth your while to merely prove to me that your career isn't a giant fraud, and that you didn't waste all those weeks at the Unaccredited University for Fictional Studies.

Are you up to the challenge?