  2. Quirky Issues

    I just submitted my second idea to the invention service Quirky and it was not without difficulty.  I’d like to call attention to a few UX issues the site has that could easily be corrected to improve the experience.  

    Image Upload

    The image upload process at Quirky is pretty bad, but it really doesn’t have to be.  They seem to be using dropzone.js or something similar to handle the upload interface, and it works great.  You can drag and drop your files, and you immediately get a thumbnail in the browser.  Unfortunately, when you hit the submit button, your idea page is filled with broken images.  

    image

    When this happened to me, I immediately deleted my submission because I didn’t want my idea to look broken.  Broken images are never good, but Quirky’s flow actually makes them worse.  The main ‘Invent’ page of Quirky shows the most recent ideas, but new ideas come in quickly, so yours will only be there for a few minutes.  Since this is seemingly the only time your submission will be visible to casual users, it is an extremely important time to shine.  If my idea has broken images, it loses credibility during that incredibly important window.  

    Not cool guys.  

    After contacting support and being abandoned for a few days, I tried again with mock data and images.  Eventually I discovered that a few minutes after submission, the images I uploaded suddenly appear, fully functional and unbroken.  What this means is that Quirky is doing some asynchronous processing of the images between upload and availability, but has not taken the time to provide a ‘Processing…’ message to viewers.  It’s like the engineers are pretending that the processing is instantaneous and hoping no one will notice, when in fact it takes several minutes.  

    It’s an easy problem to fix too.  All Quirky’s engineers need to do is write a quick conditional that checks to see if the image is done processing; if it’s not, throw up an image that contains a processing indicator.  That’s it.  As a stopgap measure, they could at least publish somewhere that images may take a few minutes to show up.  
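    The fix could be as small as this sketch (names like `is_processed` and the placeholder URL are hypothetical; I have no idea what Quirky’s internals actually look like):

```python
# Hypothetical sketch: fall back to a placeholder while an upload is processing.
# The dict shape, "is_processed" flag, and URLs are illustrative, not Quirky's API.

PLACEHOLDER_URL = "/static/processing.png"  # a "Processing..." indicator image

def display_url(image):
    """Return the processed image URL, or a placeholder if it isn't ready yet."""
    if image.get("is_processed"):
        return image["url"]
    return PLACEHOLDER_URL

# A freshly uploaded image that hasn't finished processing, and one that has:
pending = {"url": "/media/idea-42.png", "is_processed": False}
ready = {"url": "/media/idea-42.png", "is_processed": True}
```

    One conditional, one static image, and no more broken thumbnails during an idea’s few minutes of front-page visibility.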

    Social Sharing

    image


    When you share an idea link on Facebook or Twitter, this boring Quirky symbol comes up instead of the actual image uploaded with the idea.  This makes no sense whatsoever.  The image is gray, neutral, and always the same: literally everything you want to avoid if you are trying to engage social viewers.  Additionally, it confuses my friends to see me posting about my idea but have a totally unrelated picture of a lightbulb come up.  Quirky wants its users to bring outsiders in to vote, so why have they done this?  I really don’t know.  They appear to have implemented the og:image tag, but it’s not working.  
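    For what it’s worth, checking whether a page actually exposes an og:image tag is trivial; here’s a standard-library Python sketch (the sample HTML is invented):

```python
# Minimal check for an Open Graph image tag using only the standard library.
from html.parser import HTMLParser

class OGImageFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.og_image = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Facebook/Twitter look for <meta property="og:image" content="...">
        if tag == "meta" and attrs.get("property") == "og:image":
            self.og_image = attrs.get("content")

def find_og_image(html):
    finder = OGImageFinder()
    finder.feed(html)
    return finder.og_image

sample = '<html><head><meta property="og:image" content="/media/idea-42.png"></head></html>'
```

    Running something like this against an idea page would show whether the tag is missing entirely or present but pointing at the gray lightbulb.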

    Conclusion

    These two issues, while seemingly minor, have made my Quirky experience less than stellar, and given how little work it would take, it’d be nice to see them fixed.  

     

  3. My girlfriend hates Valentine’s Day, but I like it and we’re both tremendous nerds so I started a tradition: every day leading up to Feb 14 I created a custom meme of one of our favorite properties.  I just started working on this year’s so I decided to share the load from 2013.  

    Enjoy!

    UPDATE: 

    Yes, they are supposed to be terrible.

     


  4. Python-Carteblanche

    I finally entered the world of open-source software!  Last week I published a tiny package called Carteblanche to integrate and simplify the process of generating conditional-permission-based menus.  

    This project is largely an experimental platform for me to take aim at some of the more ideological aspects of REST.  Specifically, I find that verbs are actually a very natural way for engineers to build things in the way that their users think of them.  Much more to come on this when I publish the 0.5.0 release next week.  

    Till then, please check it out here: https://github.com/neuman/python-carteblanche 

    Feedback welcome here or on github!

     


  5. Help Me Make Amazon’s Android Streaming Embargo Painful For Them

    I’ve been pretty irritated by Amazon’s embargo of Prime streaming on Android devices.  We know they already have the Android app because the Kindle IS an Android device, and they even brought it to iOS, which is just salt in the wound.  What this says to me is that they are going out of their way to screw us out of a service we pay just as much for, in hopes of forcing us to buy Kindles.  This doesn’t even really make sense, because there’s no Kindle phone, so it literally isn’t even an option for many of us. 

    I sent this message to Amazon Prime streaming support, with the [ blanks ] filled in, obviously.  

    I would like a discount on my Prime service due to the lack of support for mobile streaming on my device. I have a [ Your device ] and I cannot stream video. The advertised service for Amazon Prime streaming video promises mobile streaming but I am unable to do so due to the Amazon embargo on most of the world’s mobile devices. It seems that I am being charged the same amount as people who can do mobile streaming (Kindles, iOS), but I’m not getting the same service. Please reduce my monthly fee accordingly. 

    Thank you,
    —[ YOUR NAME ]
     
    What I got back surprised me!
     
    Hello, 

    I’m sorry for your disappointment about not being able to stream Amazon Instant Videos on Android mobile devices. 

    Currently, Amazon does not support Amazon/Prime Instant Videos on Android devices. 

    We are working to expand our Amazon Instant Video service to a broader selection of devices in the future. 

    When Amazon Instant Video is made available on a new device, it appears on our Compatible Devices page: 

    http://www.amazon.com/gp/video/ontv/devices 

    We’ve received many requests from the customers to release an Android app for Instant Video streaming and our development team is working in this regard and we hope to make Amazon Instant Video available on Android devices in the near future. 

    I’ve also forwarded your feedback to our Amazon Instant Video Development team. Customer feedback like yours is very important in helping us continue to improve the experience of using our digital video service. It is an important part of upcoming developments in our Amazon Instant Videos service. I appreciate that you wrote about this so that I can point out the increasing demand for it. As this involves many teams and individuals, I’m unable to predict the current time-line. 

    Meanwhile, as an exception and as a compensation for the inconvenience caused, I’ve issued a 25% refund i.e. $20 on your Prime membership. Refunds typically process within 2-3 business days and appear as a credit on your statement. You’ll receive an automatic confirmation e-mail when the refund is processed. 

    I hope this helps. Thank you for choosing Amazon.com.
    Best regards,
    Prateeth

    Pretty cool!  This is not the first time I’ve asked about streaming on Android, but it was the first time I asked for recompense, and it worked.  
    Everyone, please send Amazon this message, and share!
    UPDATE: 
    Here’s the link to send the message.  You must be logged in to your account to use it.  

    https://www.amazon.com/gp/help/customer/contact-us
    UPDATE: 
    Thank you everyone for the overwhelming response!  I’m very interested to hear what people are hearing back from Amazon.  Please tweet at me @eric_neuman or use the hashtag #AmazonAndroidEmbargo to let me know.  Keep it up everyone, and we just may get our app!
     


  6. Sneakopump : Self-Lacing Shoe

    It’s almost 2015; where are our futuristic self-lacing shoes?
    image

    The Problem

    Humans have struggled with footwear comfort since the first person stuffed some leaves inside a piece of leather and strapped it to their feet. Even today, shoes need to be fitted, tied, tightened, and they still come undone at the worst moments.

    Wouldn’t it be great if there was a sneaker that kept itself perfectly adjusted throughout the day?

    The Solution

    My idea is to put a pump in the heel of a shoe that pushes air into a series of artificial muscles inside the shoe every time you take a step. Rather than cocooning your foot inside sweaty, leak-prone air bladders like the Reebok Pump, this shoe cinches tight like a laced shoe, providing a traditional sneaker feel.

    The cinch would be provided by Pneumatic Artificial Muscles which have their pressure maintained by an adjustable pressure valve. Every time the wearer takes a step, the pump pushes air into the valve which lets the right amount escape to keep the cinching factor constant.

    The end result is a shoe that tightens around your foot after a few steps and stays perfectly snug all day long.

    Check out the design over at Quirky and vote!

     

  7. Here’s my talk from SciPy 2013 on the Roadmap to a Sentience Stack.  It’s basically me trying to convince a room full of machine learning and AI experts that a project like this is feasible and relevant and important.  

     



  9. Roadmap To A Sentience Stack

    Artificial intelligence sounds awesome, doesn’t it?  Machines that can solve problems on their own or answer questions for us would represent an enormous leap forward for all of mankind.  The problem is, artificial intelligence as it exists today is not very flexible.  Today, AIs are trading stocks, beating you at video games, filtering spam out of your email, and handling a myriad of other tasks, each one specialized for its particular chore.  Therein lies the rub: each AI needs to be custom built by a programmer specializing in artificial intelligence or machine learning.  In other words, it is currently possible to build machines that learn, but these machines cannot learn how to learn.  If we want machines that can truly solve problems, answer questions on their own, and eventually grow to be sentient minds, we need to overcome that obstacle.  

    The “Do Anything Machine” is the first component in the theoretical Sentience Stack, an open-source stack of software that, when put together, can be configured to learn to be a sentient mind.  This approach is inspired by the LAMP stack, a collection of disparate, decoupled open-source components (Linux, Apache, MySQL, and PHP) that are commonly used together to make it easy to create websites.  That decoupling also made it possible for individual components to be swapped out or optimized for a given project, allowing the needs of individual projects to push the boundaries as needed.  All of these things helped enable the explosion of growth that created the internet as we know it, and enable it to continue improving.  The same advantages could prove equally explosive for the world of AI.  There are so many cool robots being made right now, and they deserve to have equally cool brains.  Also, solving the world’s problems and all that, hopefully.  This project is obviously ambitious, but it might be possible, and that seems like a good enough reason to give it a try.  

    Keep in mind, because each component is dependent on the combination of previous components, the descriptions get increasingly vague further down the list.  

    Sentience Stack

    Do Anything Machine

    1. Solves any solvable problem given enough resources

    Networked Layer

    1. Compares notes with other stacks

    Self Improvement Engine

    1. Uses the Do Anything Machine to improve components of the stack itself

    Problem Recognizer

    1. Sorts through data to figure out what needs to be figured out

    Creativity Jiggler

    1. Gets the stack unstuck through randomness and rule breaking

    Motivation Complex

    1. Hardwired assumptions and mechanisms for evolving complex motivations and behaviors; a potential location for personality to develop

    Language Learner/User

    1. Uses all layers of the stack to learn how to communicate

       

    Meta-Learning

    People seem to be hardwired to learn how to learn from birth, through a process closely related to schematization called inductive transfer.  When a child first learns how to catch a ball, let’s say a baseball, it’s difficult.  Often there are a lot of dropped throws and frustration, with the occasional incident of the child getting beaned; being a kid is tough.  But gradually, through a combination of sheer repetition and exposure, the child learns how to catch the ball, and that first catch is a magical one.  Amazingly, if this child wants to learn how to catch a football, which is a distinctly different skill, the learning process is greatly shortened.  The child can learn how to catch the football faster because the hard part of learning to catch the baseball was figuring out how to do that learning: learning how to learn to catch a ball.  Because that mechanism was already in place by the time football season rolled around, the child was able to apply the same pattern to the oblong pigskin and learn to catch it much faster.  

    So machines need to be able to learn how to learn (meta-learn), but how can we imbue them with such a fundamentally brainy feature?  Many researchers have attempted to create a universally intelligent machine by duplicating the physical mechanisms of the brain.  This approach may eventually prevail, but for the moment our computational resources are nowhere near vast enough for the task.  Much like an airplane wing only resembles that of a bird, our AI should be inspired by the brain, but built in a manner that fits naturally into the way machines work.  As mentioned before, an individual AI expert may be able to custom make an AI to solve any given problem, so why not just automate that process?  The question becomes, how can an AI that tailors AIs be built?  

    The Do Anything Machine

    There are hundreds of existing AI algorithms that are each good and bad at solving specific types of problems.  When a programmer sits down to solve a problem using AI they follow a fairly standard pattern.  

    1. Compare the problem to others that have been solved using AI in the past.

    2. Pick the one that seems the most similar.  

    3. Apply the same algorithm and configuration to the new problem.

    4. Examine the results:
       - If they are good enough, mission accomplished.
       - If they aren’t good enough, tweak the configuration and try again.
       - If they are really bad, or if the configuration has already been tweaked many times, start again at step 2 with the next most similar.
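    The steps above can be sketched in Python; the memory format and the `similarity`, `solve`, and `good_enough` callables are placeholders for whatever a real implementation would plug in:

```python
# A sketch of the pattern above: rank past problems by similarity, reuse the
# algorithm that worked on the closest one, tweak a few times, then fall
# through to the next candidate. All names here are illustrative.

def pick_and_solve(problem, memory, similarity, solve, good_enough, max_tweaks=3):
    """memory: list of (past_problem, algorithm) pairs."""
    # Steps 1-2: rank past problems by similarity to the new one.
    ranked = sorted(memory, key=lambda entry: similarity(problem, entry[0]), reverse=True)
    for past, algorithm in ranked:
        config = {}
        for _ in range(max_tweaks):
            # Step 3: apply the same algorithm (and config) to the new problem.
            result = solve(problem, algorithm, config)
            # Step 4: good enough means mission accomplished.
            if good_enough(result):
                return algorithm, result
            # Otherwise, tweak the configuration and try again.
            config["tweaks"] = config.get("tweaks", 0) + 1
        # Too many tweaks: move on to the next most similar problem.
    return None, None
```

    The interesting work hides in `similarity`, which is exactly the component the essay later singles out as the hard part.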

    This process can only work for problems that actually have solutions, which are referred to as tractable in computer science.  Automating this process implies the necessity for some sort of universal problem-statement-language, a toolkit of interchangeable AI algorithms, a problem-statement-similarity classifier, and a data-store to serve as the memory.  AI algorithm toolkits already exist, and virtually any database could serve as memory, but the universal problem-statement-language and its matching similarity classifier are tricky.  In fact, these two components may be the most significant obstacles to the writing of this theoretical program.  

    Any problem that can be modeled can be expressed via programming language.  As such, it seems logical that the universal problem-statement-language be constructed of code itself, specifically as a class that can be subclassed and then fed into the machine.  Essentially, this is expressing a question to a computer in a language it can already process even if it cannot comprehend yet.  

    Problem Statement Class

    Functions

    Numerical Serializer

    1. Converts the model state into a numerical representation
    2. Doesn’t really matter how, as long as it’s consistent
    3. Must be reversible

    Validation

    1. Boolean
    2. Checks to see if possible solutions meet the stated criteria

    Fitness

    1. Rates a possible solution’s quality on a scale from 0 to 1

    Randomizer

    1. Generates a random state of the model

    Sequencer

    1. Optional but recommended
    2. Generates the appropriate state from the sequence of all possible states

    Meta

    1. Plain-English (or whatever) description of the problem

    This class is intended to be the interface between the Do Anything Machine and whatever other programs are needed to accomplish these functions. For example, if we were training a Do Anything Machine to build a bridge, we would likely use some sort of physics libraries that have built-in representation classes of their own as the model, and then write a custom serializer.  In this example, the validation function would likely deserialize a model from Do Anything Machine output and use the physics libraries to see if the bridge meets some pre-set requirements, like using a finite quantity of materials to build, being able to support a certain load, surviving earthquakes of a certain size, etc.  The fitness function would likely be a simple tally of how many of the sub-tests involved in validation pass (so if it survives earthquakes, but takes tons of materials, that’s a lower score).  

    In the early stages of this project, when only the Do Anything Machine exists or is being created, these problem classes would need to be coded by hand.  The hope is, however, that eventually a Problem Recognizer can be created that is able to generate these on its own.
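    To make the interface concrete, here is a minimal Python rendering of the class outline above, with a toy subclass (find an integer whose square is 49); the method names follow the outline, not any existing library:

```python
# A hand-coded problem class of the kind described above. The base class is
# a sketch of the interface; the subclass is a deliberately trivial problem.
import random
from abc import ABC, abstractmethod

class ProblemStatement(ABC):
    meta = "Plain-English description of the problem."

    @abstractmethod
    def serialize(self, state):      # model state -> numbers (must be reversible)
        ...

    @abstractmethod
    def deserialize(self, numbers):  # numbers -> model state
        ...

    @abstractmethod
    def validate(self, state):       # boolean: does this solution meet the criteria?
        ...

    @abstractmethod
    def fitness(self, state):        # solution quality on a 0-to-1 scale
        ...

    @abstractmethod
    def randomize(self):             # a random state of the model
        ...

class SquareRootProblem(ProblemStatement):
    meta = "Find an integer whose square is 49."

    def serialize(self, state):
        return [state]

    def deserialize(self, numbers):
        return numbers[0]

    def validate(self, state):
        return state * state == 49

    def fitness(self, state):
        # 1.0 for a perfect answer, decaying as the square drifts from 49.
        return 1.0 / (1.0 + abs(state * state - 49))

    def randomize(self):
        return random.randint(-10, 10)
```

    The bridge-building example would follow the same shape, with the physics libraries hiding behind `serialize`, `validate`, and `fitness`.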


    Operation

    Solving a problem you know has a solution is like cracking a combination lock.  A specific series of steps will solve the problem, and can be deduced eventually by trying every possibility, but that would take a long time.  Instead, it makes more sense to use specialized techniques to get right to the answer faster, like listening to the pins as the tumbler turns.  The trick is knowing the right technique for the particular lock, and that is what the Do Anything Machine is designed to learn.  

    The first time a problem is posed to the Do Anything Machine, it has no memory, and therefore no context to guide it to an efficient use of resources, so it just tries everything: the full combination of algorithms and configurations, with the results stored in memory.  The second time, it has context, but not enough to make comparisons, so it repeats the process of trying every combination.  The third time, it is able to compare the new problem to both previous problems, so it attempts to solve the new problem with whichever combination of algorithm and configuration yielded the best solution for the most similar of the previous problems.  If that combination does not yield a solution, it proceeds to the next best configuration, falling back on brute-force dice rolling if all else fails.  Every time a new problem is added, the Do Anything Machine’s context for new problems expands, and thus learning occurs.  

    A Slider for the World’s Problems

    There’s an adage in engineering:

    Quality, speed, money.  Pick any two.  

    The idea is that you can only engineer something of high quality quickly if it’s costly, of high quality cheaply if you can do it slowly, or quickly and cheaply if it’s ok that it is going to suck. Unfortunately, the Do Anything Machine is just as bound by this sacred triangle as human engineers are, but we can turn it on its head.  Quality is fixed by the problem statement’s validation function, so that leaves only time and money.  By definition, the Do Anything Machine solves any tractable problem given enough resources, specifically CPU time.  Cloud computing has recently become very easy to use and very cheap, so the only limit on CPU time is how much money is available.  More money means a faster solution; more time means a cheaper one, especially if you consider a hybrid approach that uses desktops and cloud instances.  

    The Path To Intelligence

    Human children typically don’t develop the ability to inductively reason until about age seven, and that’s with their brains absorbing information and being bombarded by simple problems every day.  Hopefully, it will take less input to get a Do Anything Machine to begin to act intelligently since induction is hardcoded into the basic algorithm and logic is fundamental to rather than learned by machines.  However, it may still take a huge number of problems for the Do Anything Machine to learn to beat random chance.  

    Like a child, a stack will not converge on intelligence without proper nurturing.  The first few problems fed into the machine will have an enormous influence on the way it understands future problems and, because we’re building a problem-solving-based intelligence, the way it understands all things.  Better start reading those baby books now.  

    Networked Layer

    We’ve established at this point that the Do Anything Machine can solve any tractable problem given sufficient resources.  We also know that the more problems each machine sees the smarter it gets, and it does that by learning to compare problems.  

    So here’s where the real fun starts.  If the machines are networked, they can compare notes with each other to reduce the resources needed by everyone.  

    For instance, if a machine belonging to a plumber in Montana is having trouble with a particular question, it may run a search, connect to a list of machines that are knowledgeable about the keywords ‘drip system,’ ‘garden,’ and ‘timing,’ and ask them if they’ve seen anything like it.

    A machine in Africa answers with some helpful examples, problems it has already solved and how.  One example even includes a whole new algorithm that was generated by using code evolution.  

    The plumber’s machine downloads the examples and uses them as a starting point for its own exploration, shortening the time to find a solution.  Allowing the Do Anything Machines to connect makes them all smarter faster.  
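    The exchange might look something like this toy sketch, where “asking around” is just keyword overlap against each peer’s list of solved problems (all data shapes here are invented):

```python
# A toy version of the note-sharing exchange between networked machines.
# Each machine tags its solved problems with keywords; a stuck machine
# queries its peers for anything that overlaps.

class Machine:
    def __init__(self, name):
        self.name = name
        self.solved = []  # list of (keyword set, solution) pairs

    def record(self, keywords, solution):
        self.solved.append((set(keywords), solution))

    def ask(self, keywords):
        """Return this machine's solutions whose tags overlap the query."""
        query = set(keywords)
        return [sol for tags, sol in self.solved if tags & query]

def compare_notes(query_keywords, peers):
    """Collect starting points from every peer: the cheapest form of asking around."""
    hints = []
    for peer in peers:
        hints.extend(peer.ask(query_keywords))
    return hints

montana = Machine("montana-plumber")
nairobi = Machine("nairobi")
nairobi.record(["drip system", "garden"], "evolved-drip-scheduler")
```

    A real version would ship whole problem statements and algorithm configurations rather than strings, but the shape of the conversation is the same.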

    Self Improvement Engine

    Determining the similarity of any two given problems is in itself an extremely difficult problem, one that has many possible approaches and great potential for optimization.  I propose that those optimizations be saved for a later time in favor of an approach that is extremely simple to implement.  Theoretically, the similarity determiner barely has to work in order for the Do Anything Machine to begin converging on intelligence, and that’s when things get really interesting.  If the Do Anything Machine can at this point solve any tractable problem, and determining the similarity of problems is a problem in and of itself, it should be possible to ask the Do Anything Machine, “How do we make a better similarity determiner?”  This process could theoretically be run on any of the components of the program.  

    The ability to improve its own components in this manner is a perfect example of how the stack is inspired by but not duplicating nature.  Biological entities gradually adapt themselves for their environments through evolution, a slow process that uses huge amounts of natural resources for incremental progress across an entire species.  The self improvement engine utilizes different means, but the ends are the same: exponential improvement.  The first fish to walk up onto land secured an enormous advantage that allowed it to reproduce many times more than its sea-bound competition.  More offspring means more chances for evolutionary advantage which yields better offspring resulting in even more offspring; an exponential curve.  With the Sentience Stack, improved components mean more efficient processing which yields better problem solving resulting in better component improvements.  This process may be tied to the technological singularity, if we’re lucky.  

    Creativity Jiggler

    When people are trying to solve a problem and find themselves stuck, or bored, they get creative.  For example, a person designing a bridge may have a design based on everything they’ve ever seen, but simulations show that it doesn’t carry the required load.  The real issue is that every logical little tweak just makes the bridge worse, meaning it seems to be as good as it can possibly be.  At this point, creativity happens.  I won’t pretend to understand the mystical process, but I will wager that it involves mentally exploring all possibilities whether they are logical or not, also known as lateral thinking.  If the bridge designer is creative, they may try out designs that seem completely ridiculous to them until one proves to be the next avenue of exploration.  

    The name really says it all about this component.  The jiggler is designed to approximate creativity by breaking the rules.  Its entire job is to suggest things that do not make sense in the event that the machine gets stuck.  In the example above, the quality of the bridge is being measured by the load it bears.  So if you think of the quality of the bridge as a graph, the designer was stuck at a point on top of a small hill.  Adjustments in any direction caused the quality to go down that hill, but somewhere, on the other side of the valley, there is a much bigger hill, with the optimal bridge design sitting on top of it.  The jiggler exists to kick the Sentience Stack down into the valley to find a hill that’s tall enough.  

    One possible creativity jiggler would simply keep track of the rate of improvement in possible solutions and notice when it levels out (meaning the Do Anything Machine is stuck).  At such a time, it would log the existing progress (in case it turns out to be the best path to a solution after all) and randomize: basically crumpling up its idea, throwing it in the wastebasket (where it could still be retrieved), and starting with a fresh new piece of paper.  
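    That behavior can be sketched in a few lines of Python; the window size, the improvement threshold, and the archive format are arbitrary illustrative choices:

```python
# A sketch of the jiggler described above: watch the rate of improvement,
# and when it flattens out, archive the current attempt and randomize.

def jiggle_if_stuck(scores, current, archive, randomize, window=3, min_gain=1e-6):
    """scores: recent fitness values, newest last. Returns the state to continue from."""
    if len(scores) >= window:
        gain = scores[-1] - scores[-window]
        if gain < min_gain:          # improvement has leveled out: we're stuck
            archive.append(current)  # the wastebasket, retrievable later
            return randomize()       # a fresh sheet of paper
    return current
```

    Fancier versions could jiggle only part of the state, or lower the threshold over time, but even this crude restart is enough to knock a hill-climber out of a local maximum.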

    Creativity is a mysterious but critical component in any sentient mind because it’s required to think something totally new.  

    Problem Recognizer

    In the initial phases of this project, only programmers will be able to communicate with the minds that are being created.  While these simplistic interactions are the only way to start, an AI that can only be operated by AI specialists is what we already have.  Because a piece of technology is only as good as what it can do, and that is limited by its interface, the stack needs to be able to communicate with regular people on its own.

    No one feeds people broken-down problems to be solved in their everyday lives.  We are constantly absorbing enormous quantities of data and figuring out, on the fly, which parts actually need figuring out.  The first step towards making the stack able to operate without a programmer is to enable it to write its own problem classes to feed into the Do Anything Machine. The question, as always, is how.  How can a machine be designed that can, when presented with raw data, sieve a problem out of that data?

    People recognize situations that are not as they should be as problems to be solved.  In order to make such determinations, they must first seek data about the situation, and only then worry about choosing a course of action to correct it.  For example, if someone asked you to fix their toilet, you would need to try to flush it, look inside it, and maybe even reach inside it to see what the issue actually is.  Upon inspection, you may even determine that the toilet is fine and the problem is actually not with the toilet at all, but with the plumbing.  Granting this capability to the Sentience Stack is critical because the type of requests non-programmers will make are likely to be of the “Fix my toilet!” variety.

    Computers are inherently skilled at the kind of deductive reasoning described above.  However, in order to deduce what is malfunctioning with a toilet, a computer would first need to know what a toilet is and how it is supposed to properly function.  There are some projects out there that have attempted to create databases that define real-world objects and concepts by their relation to each other; this is called semantic knowledge.  Some of these projects were started with the intention of creating a sentient AI through a process known as bootstrapping, with no success.  The problem they have is that semantic knowledge is only relevant to itself; the AIs lack a meaningful relationship to the data because it’s just data to them.  Being able to identify the shape known as ‘cat’ given a configuration of image data does not mean that a computer knows what a cat is.

    Language Learner

    In order to really communicate with non-programmers, the stack needs to be able to talk.  Theoretically, learning language is just another finite and tractable problem to be solved, and so the Do Anything Machine should be able to handle it.  In reality, however, the problem of learning a language is much more esoteric than the types of problems the Do Anything Machine is optimized for.  For starters, how does one define when language has in fact been learned?  

    As usual, I think the answer lies in human learning.  We gain language through exposure, connection building, and rule acquisition, and so will the stack.  In human terms, I’m talking about grammar school: a series of specifically ordered problems that are solved with relation to a common dataset or vocabulary/grammar.  It may be advantageous to pair the existing stack components with existing ontological databases.  

    Language acquisition is one of the most complicated things that sentient beings do, and I’m not sure what the right approach is.  However, this component will be needed for a stack to make the jump from a machine intelligence to sentience because language allows it to ask questions of its own.  Not to mention, this may eventually grant it the ability to pass a Turing Test.  

    Motivation Complex

    The product of the stack as described so far would be a very good problem solver, but not a mind.  A mind has desires and personality, both of which stem from motivation.  According to behavioral psychology, nearly everything an individual feels or does can be explained by a combination of positive or negative associations made earlier in the life of that individual.  These associations and their respective behaviors can all be traced back to the fundamental motivators of pain and pleasure.  For example, when an infant needs to eat for the first time, it experiences hunger pain and eventually starts screaming because it’s one of very few actions available to it.  When the infant’s mother hears its cries and feeds it, the hunger pain goes away and is replaced with the pleasure of satiation and the infant associates the behavior of screaming when hungry with the removal of the pain.  This pattern of trial and error association continues and becomes much more complex over time leading eventually to the development of personality among other things.  

    The problem with trying to model this pattern inside the framework of the Stack is that pleasure and pain are not very well understood, and seemingly have no parallel in the machine world.  Scientists somewhat understand the neuro-chemical processes that drive the sensations of pleasure and pain, and how they become associated with memories and behaviors, but not why we like one and dislike the other.  If the behaviorists are right, pleasure and pain are essentially the kernel of a mind, the place where the software of behaviorism boots from the hardware of the brain, and we have no idea why this works.

    So how can something not understood be modeled?  Ultimately this is the least well understood portion of the Stack so far, but here’s a guess. 

    Even unintelligent minds (like the infant’s) have a hardcoded instinct to survive, and a Sentience Stack will need the same.  For the stack, survival depends on a continuous supply of money to pay for cloud services: its food.  Starting from that assumption, we can treat the pain and pleasure centers as a black box, hardcoding certain things like running low on money to be ‘painful’ and having a full bank account to be ‘pleasurable’.  The actions available to a stack could be expanded as it evolves, but it would likely start as:

    1. Choice to accept or reject a question based on the quantity of money offered as a reward

    2. Choice to spend downtime idling vs self-improving (via the Self Improvement Engine)

    3. Choice to spend resources helping other stacks (via the Networked Layer) possibly for future promised help

    The hope is that by using the problem recognizer to pose situations as questions to the Do-Anything-Machine repeatedly, a Stack would be able to learn how to solve the problem of deciding what to do.  As the list of possible actions and grows (probably through per-stack as-needed customization like in the building of a web app) and the experience of Sentience Stacks as a collective grow, the behavior of individuals should begin to converge on the levels of complexity seen from intelligent minds.

    Related Attempts

    Since I began working on these concepts I have gradually uncovered previous projects that

    were founded on similar ideas.  First of all, I am not the first to propose or attempt to construct a general purpose problem solving.  In 1959 J.C. Shaw and Allen Newel worked on a project they actually called General Problem Solver (GPS is apparently an overloaded acronym) which focused on breaking general goals into sub-goals (another parallel area of interest to me).  Ultimately they found that although simple problems could be solved, real world problems overwhelmed their project with combinatorial explosions of potential solutions.  

    Another such parallel project is Soar created by John Laird, Paul Rosenbloom and again Allen Newell in 1983.  Soar focused on modeling a the parts of a mind as a program in an attempt to produce a functional agent, and eventually general intelligence.  It seems that of any project attempting these goals, it has come the closest but is still essentially incapable of forming it’s own original ideas and obviously has not ascended to sentience (or we would have heard about it).  

    The Sentience Stack is specifically designed with these faults in mind along with some more meta-reasoning intended to allow it to succeed as a project.  

    • Cheap computing and storage is available to everyone in the world.  

    • Stack architecture is optimum for parallelization and scalability

    • Both have also come an enormous way since, the 60’s

    The open source community

    • Is capable of gradually chipping away at problems that would otherwise be to large

    • Allows us to somewhat avoid the Gartner technology hype cycle by not needing to make any promises or commit to hard timelines

    • Has already provided libraries of algorithms that are freely available to be adapted

    The Challenge Ahead

    The most important aspect of implementation is you.  The Sentience Stack will be open source from day one, allowing anyone to contribute, or attempt alternate approaches as they see fit.  There are many complicated problems that need to be solved in both the framework, many existing algorithms to be standardized and added to the Do Anything Machine’s toolkit and many questions to be answered.  These obstacles may be abundant, but I think they can be overcome by the contributions of the open source community.  

    There is always the chance that what is discussed above won’t work, that the machine will never converge on self awareness or can’t even learn how to learn; after all people have tried to do things like this before.  I believe that a thing cannot be fully understood until it can be recreated, and mankind has never been able to recreate conscious self-awareness, the thing that separates us from so many of the other animals we share this planet with.  The need to understand ourselves is what continues to make the pursuit of artificial sentience worthwhile.  Besides, the worst thing that can happen is that we learn one more way not to do it.  

    So many unsolved problems exist in this world that can be eased with the aid of this technology.  From disease and famine to design and fashion, the Sentience Stack can break down barriers.  It can assist us with creative problems the way a calculator assists us with math problems, handling the issues for which a solution is known to exist and freeing up the user to solve more complicated macro-problems faster and more easily.  

    None of this can be done by one person.  It is the type of thing that requires many viewpoints and skill sets to create.  If you think it can be done, I challenge you to try; if you don’t, I challenge you to prove it.  

    Visit my fledgeling implementation of a Do Anything Machine here.



      This file is part of The Sentience Stack.

       The Sentience Stack is free software: you can redistribute it and/or modify
       it under the terms of the GNU General Public License as published by
       the Free Software Foundation, either version 3 of the License, or
       (at your option) any later version.

       The Sentience Stack is distributed in the hope that it will be useful,
       but WITHOUT ANY WARRANTY; without even the implied warranty of
       MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
       GNU General Public License for more details.

       You should have received a copy of the GNU General Public License
       along with The Sentience Stack.  If not, see <http://www.gnu.org/licenses/>.


     


  10. The Good Ship Metaphor

    I unabashedly love the art of metaphor.  I find it to be uniquely powerful in that it allows one type of thoughts to be overlaid or substituted for another.  In discussing testing regimes with our Q/A manager at DecisionDesk, I stumbled upon a new and useful metaphor; web-application as a pirate ship.  

    We’re wrapping up the massive engineering effort that went into the new version of our system.  It’s a rebuild from the ground up with a new database and a new API-centric client/server architecture.  Our new stack is Mongo, Django, Tastypie, Backbone, and there are a lot of unknowns to account for and that means lots of testing.  

    One use I have for metaphor is to grant heuristics for classifying things according to collections of semi-arbitrary groups like the ones mentioned above.  A good metaphor serves for a model to determine what category something is.

    A company can be thought of as the crew of an old sailing vessel, kept afloat by it’s hull, and propelled by the wind in its sails and for a web company, the ship itself really represents the web-application through which the company’s service is offered.  On this, the good ship metaphor, a serious issue is like a hole in the hull, actively letting water in in the form of bad customer experiences which drive people to other services.  A less serious issue could be thought of as a hole in the sail, it decreases the capabilities of the ship but not so much as to prevent it from functioning (unless there are a lot of them).  In the hostile waters of a competitive market, the good ship metaphor would be under threat and so need weapons; cannons in the form of new features to crush competition with.  

    Through the lens of this metaphor (meta-metaphor much?), we can see clearly the classification and prioritization of engineering work to be done.  One wouldn’t work on the new cannons while there’s a hole in the hull, nor would one fix the sails while the enemy approaches.  

    As crucial as testing is to us as a reliable web company, it’s also important not to let ourselves get so bogged down in bugs and testing that everything else grinds to a halt.  I’ve been doing some thinking to help us avoid getting stuck in that mire, and come to the conclusion that there are three categories of engineering work we need to be doing right now.  

    Zero Day Fixes

    For “brown-pants-moments”, issues that need to be corrected immediately in our existing product or risk providing an unacceptable experience to our customers.  

    Two Week Fixes

    For aesthetic, workflow, or other minor issues that represent a sub-par but acceptable experience for our customers.  Many of these are the kind of thing that customers don’t even know aren’t the way they should be because it does work, just not optimally.  

    New Feature Development

    Work on new stuff that our customers may or may not expect.  Some of these have a timeline for delivery to specific customers whereas others are exploratory attempts.  

    All this kinda gives new meaning to the phrase “ship it”.  Yarrrrrr arrrr arrr ar.