Sunday, December 27, 2015

We conceive of limits to variation in animal behavior and morphology. When we deduce something about the constraints acting on the development of an organism or on the evolution of a species, we must distinguish between two kinds of observed limits. The first kind is a limit due to actual biological limitations, whether of embryological development, viability (i.e., animals with the variation die before they can reproduce), or fecundity. The second kind is a limit on what we as scientists can actually observe, due either to the lack of a fossil record -- for the animal as a whole or for the specific variation in question -- or to limitations in our methods and devices. The upshot of this difference is that the first kind of limit is fit for building our philosophy of nature, while the second kind is misleading and has the power to render our predictive tools worthless.

There are, of course, mitigations to the perfidy of the second kind of limit. For instance, we can sometimes determine that a variation, such as a distinctive birdsong sung by an ancient bird, would never have been visible to our tools in the first place. In addition, constraints determined in one area may imply constraints in another, sidestepping limits of the second kind. Ultimately, these ways of thinking lead to the practice of integrating heterogeneous models of living systems into a single model, in the hope that dependences between parts of the system will constrain those variations that cannot be constrained through observation. In the limit, this approach frees us entirely from "black swans," since it produces an emulation of reality. In the real world, however, where computational limitations prevent perfect copies of reality, there may yet be independent pockets of variability that we will always fail to pick up, either in observation or in our models.

~~~~

Friday, December 11, 2015


Colored Square. © 2015, by Mark W.
~~~~

Monday, July 20, 2015

Kinds of tests

These are my thoughts on software testing terminology. Some of these tests have definitions that float, and I want to get down my definitions, so that I can refer to them and refer people to them if there's any uncertainty about what I mean.
Integration test
A test of the boundary between components. An integration test in project A for component B confirms that the interface A expects of B matches the interface that B actually provides. Such tests make sense whenever component B is not available to A's developer at the time that she writes A.
Unit test
A test of the primary functionality of a component and of the edge cases and error modes of the component. The component for which a unit test is written should not have untested sub-parts -- in other words, the unit test assumes that below the level of abstraction at which the test is performed, all components perform perfectly. It may be necessary to make "perfect" sub-parts (e.g., "mock" objects) to effectively test at the desired level of abstraction.
Acceptance test
A test of a component against customer requirements. These are tests which must be passed for acceptance of the component by the customer.
Formal test
A test of the mathematical abstraction of the software component against known failure modes which can be formally proven to occur, not occur, or possibly occur under a class of inputs.
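To make the unit-test definition concrete -- in particular the idea of "perfect" sub-parts -- here is a minimal sketch in Python. The component `total_price` and its `tax_service` collaborator are hypothetical names of my own, not from any particular project; the mock stands in for a sub-part below the level of abstraction being tested.

```python
import unittest
from unittest import mock

def total_price(cart, tax_service):
    """Hypothetical component under test: sums item prices and applies tax."""
    subtotal = sum(item["price"] for item in cart)
    return subtotal * (1 + tax_service.rate_for(cart))

class TotalPriceUnitTest(unittest.TestCase):
    def test_applies_tax_to_subtotal(self):
        # The tax service is a sub-part below our level of abstraction,
        # so we replace it with a "perfect" mock that always behaves.
        tax = mock.Mock()
        tax.rate_for.return_value = 0.10
        cart = [{"price": 10.0}, {"price": 5.0}]
        self.assertAlmostEqual(total_price(cart, tax), 16.5)

    def test_empty_cart_edge_case(self):
        # Edge case: an empty cart should total zero, tax or no tax.
        tax = mock.Mock()
        tax.rate_for.return_value = 0.10
        self.assertAlmostEqual(total_price([], tax), 0.0)
```

An integration test for the same code would instead exercise a real tax service and check only that the two interfaces line up; run the above with `python -m unittest`.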

~~~~

Sunday, July 19, 2015

Human communication protocols. You can't tell what type of PDU you're dealing with without reading a variable number of headers, the offset of each header encoded, in prose, by a randomly selected preceding header. Then, when you finally get to the damn payload, too often there's virtually no content. Absurd.

~~~~

Saturday, June 13, 2015

TV show substitutes for "fuck":
  • Frell - Farscape
  • Frak/Frack - Battlestar Galactica (and many, many others)
  • Rut - Firefly (in modern English, "rut" actually refers to "a recurrent period of sexual excitement and reproductive activity" -- Collins English Dictionary)

~~~~

Saturday, May 9, 2015

Where terrorism is taking us

Consider that each attack demands that we become more careful of what we say and of what is published. We fear our fellows and their heritage. Our fundamental freedoms are restricted by the eroding influence of the attackers. Seeing the ways that these freedoms provoke the terrorists, authority figures say, "We must take precautions around all free expression which may offend the terrorists." The "may" allows room for the protections to expand -- never to contract. The terrorist will not moderate his stance by saying, "Such and such class of action is no longer offensive"; rather, with each additional attack on an action previously thought excluded, fear makes a retraction by us, and not by the terrorist, the most natural outcome. So an unfettered growth in the presence of free-expression police seems to be in effect. These police have as their mission to defend those freely and legally expressing their opinions from terrorists. Could that mission, however, be perverted, such that the police inhibit free expression in order to pre-empt potential attackers? This danger could manifest in other ways, but in particular I think of the asymmetry of possible attackers to defenders, the definitional foundation of terrorism. That is, an attack is effective given any success at all, but police cannot continually protect all targets. In the face of this asymmetry, authorities may opt to reduce the number of targets -- by restraining liberty. Is there an alternative response to terrorism that prevents a chilling of liberties while still effectively reducing the scale and volume of terrorist attacks?

~~~~

Saturday, February 28, 2015

Kind of a cool idea: a recorder that allows you to pin back later audio onto an earlier part of a recording. You, an interviewer, are recording an interview, and you hear the interviewee say something interesting about XXX. You hit a button on your audio recorder, the interview goes on to YYY and ZZZ, and then you say, "Can you say a little more about XXX?" You hit the button again and the interviewee expands on XXX. You hit another button. On the first button press, a pin is attached at the point where the interviewee mentioned XXX, when you press the button again a thread is attached to the pin with the audio from that point attached. When you press the other button, the thread is cut. The linear track of audio as spoken is preserved the whole way, but now there's another track where you can attach a single excerpt from later parts in the track to earlier parts.

There are, of course, limitations to the method. You can't attach a thread to either YYY or ZZZ with only the two buttons. You might be able to return to XXX, depending on how we set the semantics of that first button, but it makes much more sense for that button to keep doing the same thing as before, either pinning onto one of the second-track threads in a sort-of skewed tree:

-------------
   `---------
       `-----
          `--
or to re-pin on the base track (p means pin):
--------p----
   `---------
Maybe it would be better to have a three-button system to set multiple pins with the first button, a second button to attach the thread to the earliest pin, and a third button to cut the current pin.

I'm focusing on a simple button system which can be managed more or less without thinking about what thread you're on since I think that, when I talk, my model of the conversation has this sort of folding linearity that matches with this system. Obviously, a full tablet computer with a display of these threads would allow for great flexibility, but wouldn't that display get in the way of engaging with the interviewee, the focus of your activity? There's also much more applicability for the simple thing than the complicated pretty thing that's all about itself and not about the thing that you're doing with the thing. I might permit a dial of sorts that allows you to move between pins more freely.
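The three-button semantics above can be captured in a small data model. This is a toy sketch of my own devising -- the class and method names are hypothetical, and times are just seconds on the single linear recording:

```python
class ThreadedRecording:
    """Toy model of the pin-and-thread recorder: a linear base track,
    pins dropped on it, and threads attaching later excerpts to pins."""

    def __init__(self):
        self.pins = []       # timestamps marked on the base track, oldest first
        self.threads = []    # (pin_time, start, end): excerpt attached to a pin
        self._open = None    # (pin_time, start) of the thread being recorded

    def drop_pin(self, now):
        """Button 1: mark the interesting moment (e.g., when XXX comes up)."""
        self.pins.append(now)

    def attach_thread(self, now):
        """Button 2: start a thread from now back to the earliest unused pin."""
        pin = self.pins.pop(0)
        self._open = (pin, now)

    def cut_thread(self, now):
        """Button 3: close the current thread."""
        pin, start = self._open
        self.threads.append((pin, start, now))
        self._open = None

rec = ThreadedRecording()
rec.drop_pin(12.0)         # interviewee mentions XXX
rec.attach_thread(95.0)    # "Can you say a little more about XXX?"
rec.cut_thread(140.0)      # back to the main line of questions
print(rec.threads)         # one excerpt threaded back to the XXX pin
```

Note that the base track is never modified; the threads live entirely in a side structure, which matches the requirement that the linear audio as spoken be preserved the whole way.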

~~~~

Tuesday, February 10, 2015

Motion prediction

I just thought up this experiment a couple of minutes ago:

I'm a casual juggler and I'm wondering whether, when I juggle, I'm predicting the parabolic path of the balls as they fall through the air or whether I'm doing something else, like predicting a linear continuation of the balls' motion from any given point. To test this, I would have myself standing or sitting with my head in a fixed position. I would have a machine for throwing balls in a predictable arc (e.g., a batting cage ball delivery system). I would have the balls thrown with a spread of trajectories that land within the range of my arms for catching. I would have a head-mounted camera recording approximately the visuals that I could see. I would have a set of goggles that could obscure my vision after a specific time-delay from launching the balls. Each trial would consist of the throwing machine throwing a ball and myself attempting to catch it. To establish a baseline, I would attempt to catch the balls without my vision being obscured at any point in the ball's arc. Then, I would have my vision obscured before the top of the arc, at the top, and after the top until the next trial. I would have an assistant record the trials on which I caught the ball and the ones on which I did not.

In order to reject the theory that I was calculating parabolic arcs, my performance when my vision was obscured would have to be nearly as good as my performance when it was not. We would still expect that the earlier my vision was blocked, the worse my performance would be. The camera recordings are there to explore whether an alternative strategy, linear extrapolation, could be in effect. For the failed trials, we would compute the linear continuation of the ball's path from a moment, maybe .1s, before my vision was obscured and see whether my hand placement was closer to intersecting that path than the parabolic one.
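The analysis of the failed trials amounts to comparing two predicted positions. As a rough sketch (the initial position and velocity numbers are made up for illustration):

```python
G = 9.81  # gravitational acceleration, m/s^2

def parabolic_position(x0, y0, vx, vy, t):
    """True ballistic position t seconds after the reference point."""
    return x0 + vx * t, y0 + vy * t - 0.5 * G * t * t

def linear_position(x0, y0, vx, vy, t):
    """Straight-line extrapolation of the instantaneous velocity."""
    return x0 + vx * t, y0 + vy * t

# Ball state at the moment vision is cut, 0.5 s before it reaches the hands:
# 2 m up, moving 2 m/s horizontally and already descending at 1 m/s.
x0, y0, vx, vy, t = 0.0, 2.0, 2.0, -1.0, 0.5
print(parabolic_position(x0, y0, vx, vy, t))  # where the ball actually goes
print(linear_position(x0, y0, vx, vy, t))     # where a linear predictor expects it
```

The two predictions diverge vertically by exactly (1/2)Gt^2 -- over a meter for a half-second occlusion here -- so whichever of the two paths the recorded hand placement lies closer to is evidence for that strategy.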

Since I thought of this experiment before consulting any of the literature, I'm going to do a little study. I'm starting with these here:

~~~~

Friday, February 6, 2015

Custom diff formats

I just discovered this gem posted by GitHub staffer Ben Balter last year.

~~~~

Tuesday, January 20, 2015

Debugging techniques

In our C.S. classes, we were often shown a picture of a process's address space: the stack at the top growing downward, the heap at the bottom growing upward, and the free gap between them shrinking as each grows toward the other. Students asked the obvious question: what happens when they meet? The obvious answer was that there would be an exception of some kind, but I don't know if we ever probed into how that exception worked. It's pretty clear that it couldn't be an ordinary segmentation fault: the memory on either side of that gap belongs to the process, so there's no invalid address being accessed.

Recently I read that the collision is handled by marking a memory page between the stack and heap regions as a guard page. When the guard page is accessed, the processor raises a fault, much as in an ordinary page fault, which returns control to the operating system; the OS can then handle the overflow by, for instance, killing the process. Guard pages can also be used for debugging a process with unknown behavior that is presumed to access a certain portion of memory during a critical part of its operation. This technique is especially valuable against software that subverts debugging with soft breakpoints (which temporarily modify program code) by checksumming its code-in-execution, since a guard page leaves the code itself untouched.
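As a toy illustration of the collision mechanism -- this is a simulation of my own, with made-up names and sizes, not how any real kernel implements it -- imagine a heap and a stack growing toward each other with one reserved page between them:

```python
PAGE = 4096  # bytes per page in this toy address space

class GuardPageError(Exception):
    """Stands in for the fault the OS receives when the guard page is hit."""

class AddressSpace:
    """Toy model: heap grows up from the bottom, stack grows down from the
    top, and one page between them is reserved as the guard page."""

    def __init__(self, pages=16):
        self.top = pages * PAGE
        self.heap_end = 0             # first address past the heap
        self.stack_start = self.top   # lowest address of the stack

    def _check(self):
        # If less than one page remains between the regions, the growth
        # that just happened has touched the guard page.
        if self.stack_start - self.heap_end < PAGE:
            raise GuardPageError("guard page accessed: stack/heap collision")

    def sbrk(self, nbytes):           # grow the heap upward
        self.heap_end += nbytes
        self._check()

    def push_frame(self, nbytes):     # grow the stack downward
        self.stack_start -= nbytes
        self._check()

space = AddressSpace(pages=4)   # a 16 KiB toy address space
space.sbrk(2 * PAGE)            # heap takes two pages
try:
    space.push_frame(2 * PAGE)  # stack tries to take the rest
except GuardPageError as e:
    print(e)                    # the "exception of some kind" from class
```

In a real system the check isn't an explicit comparison, of course: the guard page is simply mapped with no access permissions, so the hardware faults on the first touch and the OS takes over from there.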

~~~~