Wednesday, December 10, 2014

Owning a Kindle tablet

I recently acquired a Kindle Fire tablet. I've had it for less than a week, but I'm already annoyed by the amount of vendor tie-in that comes with the device. Over the next few weeks I'll be turning this tablet into the kind of device that I like. I've started by looking over a few videos that advise on "rooting" or "jailbreaking" a Kindle Fire. For now, I'm distrustful of these, and I'll be sticking to the use of official Android developer tools and a few others that seem useful.

~~~~

Thursday, December 4, 2014

Lately, I was trying to resolve an issue with an xsl:import statement looking for a document in the Tomcat server's base directory (/var/lib/tomcat7). Naturally, you have to tell the transformer how to resolve the URI -- how would it know otherwise?
The API provides a way for URIs referenced from within the stylesheet instructions, or within the transformation, to be resolved by the calling application. You do this by creating a class that implements the URIResolver interface, with its one method, URIResolver.resolve(java.lang.String, java.lang.String), and then registering an instance of that class for the transformation instructions or the transformation itself with TransformerFactory.setURIResolver(javax.xml.transform.URIResolver) or Transformer.setURIResolver(javax.xml.transform.URIResolver). The URIResolver.resolve method takes two String arguments: the URI found in the stylesheet instructions or built as part of the transformation process, and the base URI against which the first argument will be made absolute if an absolute URI is required. The returned Source object must be usable by the transformer, as specified in its implemented features.
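
For instance, here's a minimal resolver sketch that looks up imported documents under a directory of my choosing rather than the servlet container's working directory (the class name and the stylesheet directory are just placeholders):

import java.io.File;
import javax.xml.transform.Source;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.URIResolver;
import javax.xml.transform.stream.StreamSource;

// Resolves hrefs from xsl:import/xsl:include against a chosen base directory
// instead of whatever directory the JVM happens to be running in.
class DirectoryURIResolver implements URIResolver {
    private final File baseDir;

    DirectoryURIResolver(File baseDir) {
        this.baseDir = baseDir;
    }

    @Override
    public Source resolve(String href, String base) throws TransformerException {
        File resolved = new File(baseDir, href);
        if (!resolved.isFile()) {
            throw new TransformerException("Cannot resolve " + href + " under " + baseDir);
        }
        return new StreamSource(resolved);
    }
}

// Usage:
// TransformerFactory factory = TransformerFactory.newInstance();
// factory.setURIResolver(new DirectoryURIResolver(new File("/path/to/stylesheets")));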

~~~~

Thursday, November 27, 2014

Essie Mae Washington-Williams

From Wikipedia (with edits):

Essie Mae Washington-Williams (October 12, 1925 – February 4, 2013) was an American teacher and writer. She is best known as the oldest natural child of Strom Thurmond, Governor of South Carolina and longtime United States Senator, known for his pro-racial segregation policies. Of mixed race, she was born to Carrie Butler, a 16-year-old black girl who worked as a household servant for Thurmond's parents, and Thurmond, then 22 and unmarried. Washington-Williams grew up in the family of one of her mother's sisters, not learning of her biological parents until 1938 when her mother came for a visit and informed Essie Mae she was her mother. She graduated from college, earned a master's degree, married and had a family, and had a 30-year professional career in education.

Washington-Williams did not reveal her biological father's identity until she was 78 years old, after Thurmond's death in 2003. He had paid for her college education, and took an interest in her and her family all his life. In 2004 she joined the Daughters of the American Revolution and United Daughters of the Confederacy through Thurmond's ancestral lines. She encouraged other African Americans to join such lineage societies, to enlarge the histories they represent. In 2005, she published her autobiography, which was nominated for the National Book Award and a Pulitzer Prize.

It can't be easy, I think, to go unacknowledged as your father's offspring for so many years. I'm rather impressed by Mrs. Washington-Williams's take on the historical importance of her heritage and on the importance for blacks of joining in our nation's complex and sometimes unsavory history.

~~~~

Wednesday, November 19, 2014

I just discovered this site genius.com. Haven't explored it, but it looks nice at least.

~~~~

Tuesday, November 18, 2014

Python3 or Bust

I decided to transition my library, Yarom (Yet another rdf-object mapper), to Python 3. I've resisted using Python 3 at all to avoid dealing with the transition/rewrite tools (six, 2to3) and the still-unported packages. What made me reconsider is the Unicode support. Although I don't code in a language that requires special characters, I understand that other people do. Making it more comfortable for them to write code is worth the trouble, I think. Besides that, new core Python development should be happening in Python 3, making it safer to go with the latest version.

In the future, I might post about how the Python 2/3 issues, as well as Ubuntu release cycles and the current Haiku OS discussion on a non-alpha release, have affected my thoughts on software versions.
~~~~

Sunday, November 16, 2014

I just remembered that I once chatted regularly (maybe once a week) with a guy from China. It was early in college. He was a pretty cool dude. He even introduced me to his friend once.

I don't remember his name though :(
~~~~

Saturday, November 8, 2014

Night Witches

Here's an article about the "night witches", Soviet bomber pilots from WWII, and a blogpost with links to a little more.
~~~~

Tuesday, November 4, 2014

Sort and replace identifiers in a sentence

I found this post on LinuxQuestions that interested me, so I decided to try my hand at it. What I wrote is slightly more general in that it sorts any identifiers matching a pattern rather than just identifiers with numbers. The program does 3 passes over the sentence: the first pass extracts the identifiers and sorts them; the second substitutes each of the matching identifiers with "{}", a placeholder that can be replaced using Python's string formatting; the third pass is the actual substitution using the string formatter.
import re

word_split_regex = re.compile(r"[\W\s]*")
id_regex = re.compile(r"id\d+")
natsort_regex = re.compile('([0-9]+)')

# from http://stackoverflow.com/questions/4836710/
#  does-python-have-a-built-in-function-for-string-natural-sort#18415320
def natural_sort_key(s):
    return [int(text) if text.isdigit() else text.lower()
            for text in re.split(natsort_regex, s)]

def main(s):
    # Extract the identifiers, sort them naturally, then substitute them back in order.
    b = sorted(id_regex.findall(s), key=natural_sort_key)
    x = id_regex.sub("{}", s)
    print(x.format(*b))

if __name__ == "__main__":
    import sys
    if len(sys.argv) > 1:
        main(sys.argv[1])
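
For example, main("id10 comes after id2 and id1") prints "id1 comes after id2 and id10": the identifiers are extracted, sorted naturally, and substituted back into the sentence in order.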

~~~~

Sunday, November 2, 2014

Disable Checked Exceptions?

I was recently frustrated by the necessity of annotating every function in the call chain in order to avoid handling an exception at the entry point to my code. The module I am working on provides many different signatures for essentially the same method:

public static void doTransformation(Transformer t, Source in, Result out)
public static void doTransformation(Transformer t, InputStream in, Result out)
public static void doTransformation(Transformer t, InputStream in, OutputStream out)
public static void doTransformation(String t, InputStream in, OutputStream out)
public static void doTransformation(String xslt, String in, Writer out)
public static void doTransformation(String xslt, String in, OutputStream out)

Only the first actually does the transformation, but this can throw an exception from an external library, and all of the others call it directly or indirectly. I didn't want to annotate all of these with a throws declaration because I didn't know if I would have to switch out Transformer for something else or add more such methods in the future.

My first reaction was to leave off and go read a book; so I did that and then went to sleep. This morning I was looking at the code again and I was reminded of the first tool (or maybe second, after abstraction) in the programmer's toolbox: indirection. Although a checked exception, if unhandled, can introduce a lot of unnecessary annotations into your code, you aren't obligated to use that exception throughout. All it took was wrapping that exception in one of my own derived from Java's RuntimeException, and now it passes through to a place where I can deal with the exception appropriately.
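
As a sketch of what I mean (the wrapper class name here is just something made up for illustration), the checked TransformerException gets caught once, in the innermost overload, and re-thrown unchecked:

// An unchecked wrapper so the other overloads need no throws declarations.
class TransformationFailedException extends RuntimeException {
    TransformationFailedException(String message, Throwable cause) {
        super(message, cause);
    }
}

public static void doTransformation(Transformer t, Source in, Result out) {
    try {
        t.transform(in, out);
    } catch (javax.xml.transform.TransformerException e) {
        // Callers further up catch TransformationFailedException wherever it
        // actually makes sense to handle the failure.
        throw new TransformationFailedException("transformation failed", e);
    }
}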

I should have thought of such a simple solution right away -- maybe in the future I'll remember not to write code when I'm so tired!

I happened upon this excellent blog post on exceptions in Java. Really, I can only vouch for the headings and the comments, but it's a very good summary.
~~~~

Thursday, October 30, 2014

I was watching this video here and, at the end, the speaker, Ghislain Nkeramugaba, mentioned that there is an unwritten rule that broadband Internet access is built out with road construction. Hearing this makes me think that the countries which are building out their infrastructure for the first time must be at a great advantage over older nations that have had to patch centuries-old (or older) infrastructure to bring in broadband access. Especially in Europe, where various buildings and roads may have been there for a millennium or more, bureaucratic restrictions may slow down build-out regardless of industrial sophistication. That certainly is not to say that countries like Rwanda have no areas which are worth protecting; however, I think there is always greater difficulty in tearing up and replacing established infrastructure versus adding something that did not exist before.

What's the upshot of this alleged smaller burden of history? It's really unclear to me as I have no background in the development of infrastructure. I do suspect however, that there are opportunities for innovative plans for building the networks that power developing countries and that these countries will be the laboratories of exciting new Internet technologies.
~~~~

Wednesday, October 29, 2014

I've thought a little about why it is that I have more trouble taking the time to learn some software development tools than others. In one part, I prefer software that is well maintained, well spoken of, and well-used. I am wary of newer tools (pretty much anything that can't be traced to before 2006).

My reason for avoiding newer projects has little to do with how well-made they are. I don't know if they are or not most of the time until I start using them and dig into the code. Instead, it's more that I fear taking pains to learn some framework or tool-chain only to have the usefulness of my learning negated by some other newer system.

The keep-up game never appealed to me. It's why I avoided the technology fetishism that lusts after the newest/fastest/sleekest version of a few-months-old product. It's clear that it takes real work to keep abreast of such changes, to understand the strengths and weaknesses of a line of products, to make useful characterizations of a brand. The thing is, I have no head for remembering system specs; to do so would be a non-trivial investment, and that investment would be made moot two or three versions down the line. That's my concern anyway.

So, it's the same with software development tools, especially web frameworks. I would rather learn fundamentals of network programming and HTTP. Once learned, these provide a basis, or so I imagine, for extension in a greater number of directions than if I learned how to make a J2EE web app. Perhaps this is the wrong way of thinking about things. After all, knowing fundamental physics doesn't necessarily make you a good chemist, let alone biologist, although the latter are built up in terms of physics.
~~~~

Friday, September 19, 2014

Have an idea for a flat job-assignment structure for a consultancy or small-jobs company. The company has one corps of engineers who work client cases. Clients work specifically with one of these agent-engineers as long as the relationship lasts and they coordinate feature additions, formulate engineering tasks from customer issues, advocate for the task-engineers (we'll get to them), negotiate the up-front price of the task, and input tasks for completion. A second corps of engineers, when tasks are input, self-assign to the tasks they want to work on. Task-engineers can bring in or agree to work with other engineers and either work out a payment split among themselves or formally agree on how to split it. Task-engineers can also suggest changes to the payments which would encourage them to work on the task.

Naturally, there would be some tasks that virtually no one wants to work on because they're mundane, no one has the skills, or no one likes the company, or any number of reasons. For these tasks, there should be a coercive rule that, for instance, requires junior engineers to take tasks no one else acts on. Alternatively, there could be something like a points system. Really guaranteeing liveness of every task is the hardest thing to manage when you have self-election. It could be necessary to remove the guarantee of completion -- which would be awful -- or to outsource the tasks or to have such a large pool of workers that tasks rarely die and can be handled on a case-by-case basis.

~~~~

Tuesday, September 9, 2014

Margaret Sanger

Margaret Sanger is credited with founding the modern birth control movement and with advancing contraceptive education for women in the United States. She began her campaign for women's health and welfare in the 1900s, while the Comstock Laws (in effect since the 1870s) restricted the transmission of contraceptive information on the grounds that the materials were obscene.

~~~~

Wednesday, August 27, 2014

For some reason, the Play Framework docs give you an overly verbose syntax for setting up a fake application on each test:

    @Test
    public void findById() {
        running(fakeApplication(inMemoryDatabase("test")), new Runnable() {
            public void run() {
                Computer macintosh = Computer.findById(21l);
                assertThat(macintosh.name).isEqualTo("Macintosh");
                assertThat(formatted(macintosh.introduced)).isEqualTo("1984-01-24");
            }
        });
    }
It's much cleaner to do that with the Helpers class and JUnit setup and teardown methods (i.e., @Before and @After):

public class ApplicationTest {
    private FakeApplication fa;

    private FakeApplication provideFakeApplication()
    {
        return Helpers.fakeApplication(Helpers.inMemoryDatabase());
    }

    @Before
    public void startapp()
    {
        fa = provideFakeApplication();
        Helpers.start(fa);
    }

    @After
    public void stopapp()
    {
        Helpers.stop(fa);
    }

    @Test
    public void findByID() {
        /* Runs in a FakeApplication context */
        Computer macintosh = Computer.findById(21L);
        assertThat(macintosh.name).isEqualTo("Macintosh");
        assertThat(formatted(macintosh.introduced)).isEqualTo("1984-01-24");
    }
...
}
This way you take advantage of well-understood contextualization through setup and teardown methods while keeping the test itself focused on what it is testing. Besides, the wrapper is hard to justify even for one-off tests, except that it saves you from forgetting to call Helpers.stop.

~~~~

Monday, August 25, 2014

Despite all of my posts discounting my TagFS (there are so many), I've started dog-fooding it, using a mounted TagFS for articles and things I've downloaded while doing research. It may be that standard Unix commands and file browsers are sufficient interfaces for the common case of tagging files. I will post back eventually if I make changes based on this usage.

~~~~

Saturday, July 19, 2014

What joy when the insouciant
armadillo glances at us and doesn't
quicken his trotting
across the track into the palm brush.
What is this joy? That no animal
falters, but knows what it must do
--Denise Levertov

~~~~

Tuesday, July 15, 2014

"Such acts targeting places of worship are unacceptable. They are extremely grave and will always find a determined response from the authorities," he said in a statement. He said France "will never tolerate the import of the Israeli-Palestinian conflict on French soil."

~~~~

Thursday, July 10, 2014

It's hard to explain the phrase, "There's no such thing as consciousness". I used to say it, but I don't know if it's as productive as I used to think it is. When I say that, my intention is to divorce the idea of consciousness from its myriad associations -- the irremovable notions that are attached to the word "consciousness". I think we can get to the place where the phenomenon itself is usable -- where it becomes like a pencil, perhaps. That is to say, where something like a human can affect the world through an exercise like writing, to re-inscribe on some little part of the world (like a piece of paper) some little part of its little world (its mind).

These analogies and analogies...

~~~~

Tuesday, July 1, 2014

What is the BRAIN initiative? The Obama administration's moon shot?

~~~~

Sunday, June 29, 2014

Abraham and Christianity

At this point, I understand that Abraham, the 'father of the Jews', was used by early Christian writers to craft the nascent Christian community. Paul referenced God acknowledging Abraham's faithfulness prior to his circumcision, both in Genesis 15:6, when "Abram believed the LORD, and [God] credited it to him as righteousness", and earlier in Genesis 12:2-3, when God makes his first promise. Because these acknowledgements were made before Abraham is commanded to circumcise himself, Paul claims that the promises of the covenant are open to people on the basis of faith. Barnabas uses typological arguments, drawing connections between stories in the Old Testament (some of which are related to Abraham) and Christ. These arguments supposedly show that the Old Testament, the 'type', is fulfilled and superseded by the 'anti-type' in Christ's life.

The main issue I have is that I can't understand why these writers would want to exclude the Jews (of whom many of them were a part) from the covenantal promises -- in some cases excluding them from the possibility of inclusion entirely.

~~~~

Thursday, May 29, 2014

From some random notes:

Eventually we will be living in a world of disposable intelligent beings that can be copied and deleted, more-or-less, at will. A thinking machine could be subjected to extreme horrors and summarily deleted and destroyed without consequence for any that didn't know it. I'm not even sure if that's terrible or not. It's basically the situation in the Matrix films both before the rebellion of the machines and after. The machines, after the creation of AI, could be made and recycled on assembly lines and the machines basically did the same to humans after the rebellion. The key thing about these films isn't the man versus machine epic that gets easy play in Hollywood. It's the fact that what the machines did to the humans was no worse than what the humans did to the machines: it was every bit as horrific.
~~~~

Sunday, May 25, 2014

I'm finally getting geppetto installed so that I have something to work with on this project. I understand that it may be unreasonable to have every developer running the full project on his machine, but I need a way to get a feel for this project rather than just being the data guy. It's a fucking simulation project and I've run exactly zero simulations from the project itself.

So far things are going well. Geppetto core built without a hitch.
~~~~

Friday, May 23, 2014

I'm amazed by the depth in Final Fantasy 8. They put so much effort into details some players never even see! Like in Dollet, the bar owner is an avid card player. If you beat him he takes you to his room where there are some books. In one of the piles you can read his journals. The first describes how he meets his wife by playing cards with her (and losing). The second describes his daughter and how he loses his wife when she saves her daughter from drowning. This is way too intense for a video game side story.

~~~~

Wednesday, May 21, 2014

The First Borg

I'd like to read the history of the Borg. Who were the first Borg? Before the implants, did they have the same ethos of cultural and technological assimilation? Did the Borg ever value independent expression on the level of single organisms? Did any one group design the Borg hive mind system or did it evolve over time?

The Borg were a race considered to be "evil" in the Star Trek universe largely because they did not value the autonomy of individual sentient beings. Naturally, species that, like humans, were predisposed to act as individuals would fear and oppose the Borg. I can't recall what else the Borg did which would merit being called evil.

~~~~

Tuesday, May 20, 2014

Women in science

In the past I've read that some people, mostly men, think women are not cut out to make discoveries in scientific fields. I decided to make a list of female scientists who made major contributions to their fields as determined by their peers. I'm making this list not because there aren't any such lists but because these are the ones I've actually read about (and thus, should be able to recall when I have reason to).
Julia Platt - an embryologist credited with discovering that neural crest cells formed the jaw cartilage and tooth dentine of the salamander.
Alicia Boole Stott - introduced the term 'polytope' to English mathematicians.
This list will grow as I read more.
EDIT: I've decided that future additions will be made as individual blog posts, rather than edits to this single post, so the list is easier to follow.
~~~~

Sunday, May 18, 2014

From a conversation with my aunt:

My acceptance of some scientific theories is based on my assumption that most researchers are, at least, not likely to lie about their results. I also hope that not a few act with consideration of the ethics of reporting research. Then, it seems unlikely, from my perspective, that so many scientists would agree on something if it weren't independently observable, testable, verifiable. I also assume that those people who make up the majority of their fields are actual scientists when I accept a theory.

I must emphasize, however, that like in the classic quote where Sherlock Holmes declares that he couldn't care less if the earth revolved around the sun, many of the theories I encounter can be true or false and it wouldn't have any bearing on my day-to-day. In particular, if evolution isn't the explanation for the diversity of earth life, it doesn't hurt me now to assume that it is the explanation, and I know that taking evolution as a base permits some useful science to be done. Other theories, because they are theories, can be tested by me with the appropriate equipment to either confirm or disconfirm the theory with sufficient rigor to satisfy my own sense of what is true about the world. That is what separates the acceptance of a theory from belief. Beliefs are those assumptions which cannot be tested, but a theory offers a means, even if that offer is never taken advantage of, to base your assumptions on ones that are more sure.

~~~~

Monday, April 7, 2014

A nice quote

"It is not birth, marriage, or death, but gastrulation, which is truly the most important time in your life." - Lewis Wolpert, pioneering developmental biologist
~~~~

Wednesday, April 2, 2014

To what extent do the assumptions we make, collectively and independently, influence the models we construct in the world? In other words, how strong are our assumptions? Also, what are they?

This is something I ask generally, about all scientific fields. Most interestingly, what are the assumptions made by physicists which might be hindering the achievement of things like teleportation of matter and faster-than-light travel (if these are even possible)?

If the assumptions are too great, how could we break out of them? What would we find?
~~~~

Thursday, March 20, 2014

I have this idea about life systems. It's that at least part of their function is to take states in the present, past, and future and map to states in other time frames. How can I make this notion falsifiable/testable?
~~~~

Thursday, March 13, 2014

As far as TagFS goes (and all of the projects in a similar vein -- dhtfs, TaggedFS, all of the other "TagFS"), the fundamental problem is and always has been the interface. I effectively use tagging on my Firefox bookmarks to annotate and retrieve data that interests me. The great advantage provided by this interface is that it makes tagging feel like second nature, so it's unobtrusive and helpful. My approach with TagFS would never have worked because it's cumbersome to set up and use the tags in a heterogeneous system and, more importantly, there was no way to declare tags at file creation points: those critical moments in time when you still give a damn about the files, but haven't yet lost them. Harnessing the motivation surrounding the initial data entry and aligning annotations with the thoughts that first compelled the user to save the data -- what I think of as the "retrieval memory" -- must be the key concern for a successful interface, and that's what Firefox bookmark tagging gets right.

How can we bring such an interface to the desktop? My first suggestion would be a modification of the file/save dialog in most applications. This is usually tied into the interface toolkit (e.g., GTK) in a way that should make it possible to modify it directly and have that modification spread to all applications that use it. There are, however, other ways to create and save files, such as through the command line or through applications that do not use the standard interface toolkit. One other suggestion is to create a virtual file system layer that presents a secondary prompt for tagging on file creation. This, of course, has its own problems in that not every file created in the user's name needs tagging (e.g., applications frequently create cache and config files). We can get around that limitation by demanding that the user select directories to be managed. There are still no guarantees that a third-party program won't attempt creation of files in such a directory, but adding to the prompt an option to never tag a certain kind of file may solve this problem.
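
As a very rough sketch of that second suggestion (the class and the prompt are stand-ins; a real version would sit at the VFS layer rather than watching directories after the fact), java.nio.file.WatchService can at least demonstrate prompting on creation inside a managed directory:

import java.nio.file.*;
import static java.nio.file.StandardWatchEventKinds.ENTRY_CREATE;

public class TagPrompt {
    public static void main(String[] args) throws Exception {
        Path managed = Paths.get(args[0]);  // the directory the user chose to manage
        WatchService watcher = FileSystems.getDefault().newWatchService();
        managed.register(watcher, ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take();  // block until something is created
            for (WatchEvent<?> event : key.pollEvents()) {
                Path created = managed.resolve((Path) event.context());
                // Stand-in for the real tagging prompt; a desktop version would
                // pop a dialog and record the tags in the TagFS store.
                System.out.println("Tag the new file: " + created);
            }
            key.reset();
        }
    }
}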
~~~~

Wednesday, March 12, 2014

A useful site!

I often have trouble with pronouncing words I've read.

I've found this wonderful site (and bookmarked it this time!) to help me out: HowJSay is a site with a list of words spoken by (apparently) 2 or 3 British men. For a few words whose American pronunciation differs from the British one, the recordings include both.
~~~~

Thursday, March 6, 2014

Why, if cortical areas were largely the same in their organization, would there be visual differences in the various regions of the cerebral cortex?
~~~~

Monday, March 3, 2014

LTP and learning

How could we test the relationship between long term potentiation and learning?

That's hard. Anyway. How could we test whether increasing the surface area of a terminal bouton results in an increased post-synaptic response? I guess first it's necessary to affirm that it isn't a foregone conclusion that this mechanism already operates. According to Dr P, an increase in the area of a terminal bouton results in an increase in the number of docking sites (mediated by the presence of precursors for the creation of synaptotagmin and SNARE molecules, I suppose). We also know that under low-Ca++ concentrations, the primary means of exocytosis into the synapse is through complete fusion and collapse of the vesicles. However, the reason that under these conditions (which are typical for at least some neurons) the boutons don't grow unbounded is that the membranes are also recycled into new vesicles through the process of endocytosis [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2343565/ and others].

So anyway, first stop endocytosis at the axon terminal without hindering the process of exocytosis, the creation of new docking sites, or anything really having to do with the movement of vesicles (if possible... maybe some virus?). Second, monitor the extracellular concentration of neurotransmitter following the introduction of Ca++ into the terminal (this might also be achieved with an AP, but that may have additional consequences not considered here).
~~~~

Thursday, February 27, 2014

Sometimes I'm afraid that we forget what it was like to not organize things into classes, subclasses, types, and objects and that we got along fine like that. We need to be able to make use of these informal schemata which make use of other kinds of relationships between data. Although things might be reducible to labeled, directed graphs, we should definitely not restrict creation to that level.

We need a means of expression for the otherwise inexpressible. Those things which are only the immediate, fleeting concordance of sensory information and gestaltic, nameless unity; those things I want to give presence and meaning. These are things which we give names and construe in our own lives, but they are not the things themselves. The feelings of eyes all around looking on in distrust, hate, or condescension cannot be understood by the word "racism". A need to be a man while living in a woman's body is not translatable into any language that we can all speak. The well-fed business man does not comprehend the poor man's escape in heroin addiction.

Concepts like these matter. To understand them is to know the motivations for action and to walk in the other person's shoes. Compassion through shared experience is as old as story-telling, but the forces which enact it are too sparse and inconstant. Failure to understand the other man weakens us. It turns us to hate, confusion, secrecy, fear. Men learned to conquer parts of this in prehistory so that we could live in villages and not kill one another over minor faults. I think we need to learn again how to live with humanity on a global scale.

~~~~

Thursday, February 20, 2014

A Kiwi artist, Henry Christian Slane. I like his figure work.
~~~~

Wednesday, February 19, 2014

disarming malware

The problem with bad software isn't that it gets made. It will get made, and no matter how easy we make the tools for specification and testing of software, there will be people who use unsafe production methods to make unreliable and exploitable software. However, I would hope that a secure system can exist. In the same vein as modern cryptography, I acknowledge that perfect security in the general case is unlikely, but suggest that strong security -- in some specific, provable sense -- can exist in the software systems that humans subject themselves to.

My main goals:

  1. make the economic benefits of releasing exploitable software less than the costs of producing good software.
  2. minimize the effects of unavoidable exploits, eliminate avoidable exploits, provide tools for realizing these goals in software production systems.
  3. make more accessible tools which can formally exclude the possibility of certain classes of exploits existing.
  4. spread more knowledge about safe software practices to those who make software.

~~~~

Treating software as biology

Not technical at all, but I was just thinking about how sometimes software does things that don't make sense to us and which have no easy solution from a first guess at what's going on. It would be useful to have an integrative view of a piece of running software. What I mean is, we want to be able to see what the features of a system are over time and how they change, but we want to see all of these things at the same time. That, I really hope, isn't too hard. From there I would like to see how we can take the state of the program and relate it to the activity of the system it works in.

This idea isn't entirely my own. There was a paper I read a while back about treating a piece of malware as a virus which has a certain system-call profile. For a single architecture and operating system, this profile should be statistically stable across machines and serve as a sort of marker. That idea comes, very loosely I'm afraid, from the immune system, which can identify antigens by interacting with them.
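
Just to make the "marker" idea concrete for myself, here is a toy sketch (it assumes strace-style logs as input, and cosine similarity is only my choice of comparison; the paper surely did something more careful):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Build a relative-frequency profile of system call names from a strace-style
// log and compare two such profiles with cosine similarity.
public class SyscallProfile {
    static Map<String, Double> profile(String logPath) throws IOException {
        Map<String, Double> counts = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(logPath))) {
            int paren = line.indexOf('(');
            if (paren <= 0) continue;  // skip lines that don't look like calls
            counts.merge(line.substring(0, paren).trim(), 1.0, Double::sum);
        }
        double total = counts.values().stream().mapToDouble(Double::doubleValue).sum();
        if (total > 0) counts.replaceAll((k, v) -> v / total);  // normalize to frequencies
        return counts;
    }

    static double cosine(Map<String, Double> a, Map<String, Double> b) {
        Set<String> keys = new HashSet<>(a.keySet());
        keys.addAll(b.keySet());
        double dot = 0, na = 0, nb = 0;
        for (String k : keys) {
            double x = a.getOrDefault(k, 0.0), y = b.getOrDefault(k, 0.0);
            dot += x * y;
            na += x * x;
            nb += y * y;
        }
        return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("similarity: " + cosine(profile(args[0]), profile(args[1])));
    }
}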

~~~~

Wednesday, February 5, 2014

A video of Michael Rabin discussing second-price auctions:
https://video.ias.edu/csdm/1213/0429-MichaelRabin

Concerns collusion resistance and zero-knowledge. The bidders want to hide their bids from everyone but the auctioneer (the evaluator-prover), but they also want to know that the result was calculated correctly. There is a danger of bidders colluding to get a lower price (to the detriment of the auctioneer).

More from Micali (collaborated with Rabin on this topic): http://cacm.acm.org/magazines/2014/2/171688-cryptography-miracles-secure-auctions-matching-problem-verification/fulltext

~~~~

Tuesday, February 4, 2014

Something that would be useful: a running list of abbreviations and definitions along the side of a document reader. The listing depends on which abbreviations have been used up to that point in the paper -- so they appear and disappear as you go down and up the document. The definitions can be attached to the document as metadata, and that data can be modified by the user through the reader to include their own definitions.

A companion to this is a tool which will give abbreviations, synonyms, and definitions for certain terms on mouse-over.

~~~~

Friday, January 24, 2014

It is possible to change the patterns of thought and shift into "flow" intentionally and without rituals, etc. The difficulty is the delay between the decision and the onset of the mental state. Patience and experience bridge the gap.

~~~~

Monday, January 13, 2014

Should all of the constraints use sets?

I'm considering making all constraints use sets. This change would be confined to the connector class, since every value that didn't already use sets could be implicitly wrapped in a singleton set and then unwrapped on getValue calls. The problem there is, of course, that the constraints which do make use of sets would then have to form their own singletons whenever they request a value. Obviously, this is annoying. Part of the usage of sets that I'm thinking of is with inequalities. I want to be able to specify that a whole range of values could satisfy an inequality where previously, I had to settle for just not setting anything on that connector. In this context, singleton values don't make sense because they need to be dereferenced to the exact value anyway.

The main thing is that when we set a single value on top of a set, we have to test membership, but when we set another set on top of a set, we have to test that the intersection is non-empty (and possibly produce another value).
This gives something like:

(define (consistent? newval v)
...
(match (list newval v)
       [`(,(? Set? x) ,(? Set? y))
        (intersect x y)]
       [(or `(,(? Set? x) ,(? (negate Set?) y))
            `(,(? (negate Set?) y) ,(? Set? x)))
        (member? x y)]) ; member? returns y or (gensym)
...)
to get the value to set. Note that this also changes the meaning of consistent? to what might be called makeConsistent: an operator that takes two values of some type and returns a value that is consistent with both of them. This can be achieved by way of generics. The various set functions can be used with any values that they need, and Connector only needs to know about the generic function makeConsistent. This allows us to define any number of types and give them their own definition of makeConsistent, but also does not require any changes to Connector -- the two concerns are completely separated.

Going back to the original point of this post, we still have to change the use of consistent? in Connector to makeConsistent and change the semantics of the constraint solver such that any constraint can set values on the connector (this was the case before), but may not be the setter anymore, since the value that is finally set doesn't belong to any of them. However, assuming the makeConsistent functions are correctly implemented, we have to notify neither the setter of the old value nor the setter of the new one, since the result will be consistent with them both. This may make the logic slightly more complicated, but the advantage of allowing users to extend the types outweighs that concern.

~~~~

Wednesday, January 1, 2014

Rule matching with the Observer pattern

A rule matching algorithm has an intuitive implementation with the use of the observer pattern. Model state data as observables and productions as observers of state objects. Updates to the state objects trigger notices to productions. Notices to productions can include the state object which sent the notice. Productions request to add themselves to a set of rules-to-execute. Conflict resolution between productions can be done on addition to the set or in a separate phase.
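
A bare-bones sketch of that structure (the class names and the single ordered conflict set are my own simplifications):

import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// State objects notify the productions observing them; triggered productions
// request to be added to a conflict set, which a separate phase then executes.
class StateObject {
    private final List<Production> observers = new ArrayList<>();
    private Object value;

    void addObserver(Production p) { observers.add(p); }
    void removeObserver(Production p) { observers.remove(p); }
    Object get() { return value; }

    void set(Object newValue) {
        value = newValue;
        for (Production p : observers) {
            p.notifyChanged(this);  // the notice includes the state object that changed
        }
    }
}

abstract class Production {
    void notifyChanged(StateObject changed) {
        if (preconditionsMet()) {
            RuleEngine.CONFLICT_SET.add(this);  // request to be executed
        }
    }
    abstract boolean preconditionsMet();
    abstract void fire();
}

class RuleEngine {
    // Conflict resolution could happen on insertion or in a separate phase.
    static final Set<Production> CONFLICT_SET = new LinkedHashSet<>();

    static void runCycle() {
        for (Production p : CONFLICT_SET) {
            p.fire();
        }
        CONFLICT_SET.clear();
    }
}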

Running time estimations

Updates to N state objects with at most M productions depending on any one of them take O(N*M) time if the productions know which rule fired and productions get notified independently, for each state object, of all changes to their preconditions. Deleting a production takes O(B) time to unregister the production from each of B data objects. Deleting N data objects with at most M dependents depending on at most B data objects takes O(N*M*B) time if the dependents must unregister themselves when they can't be satisfied (presumably they can't if all precondition variables must be present to fire), or O(N*M) otherwise.

This algorithm improves over a naive implementation of searching through the whole list of productions on each update, but doesn't perform as well as Rete matching on updates.

~~~~