[nupic-theory] Proposed temporal pooling mechanism

Fergal Byrne fergalbyrnedublin at gmail.com
Wed Aug 27 03:08:31 EDT 2014


Jake,

Your reasoning is 100% correct, but one of your assumptions is not:

...the choice of which columns to activate is completely independent of
predictions and bursting etc

This is currently how NuPIC works, and it's how the White Paper describes
things, but it's a simplification too far and it does not match the way
that neocortex works. The change of pace as you rise in the hierarchy
derives directly from the removal of this simplification. You need to have
every cell add the predictive potential (from distal dendrites) to the
feedforward input (from proximal dendrites), and then perform inhibition.
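
To make that concrete, here's a toy sketch in Python. The names, shapes and
the 0.5 weighting are illustrative only - this is not NuPIC's actual code,
just the shape of the change:

import numpy as np

def select_active_columns(proximal_overlap, distal_depolarisation,
                          num_active=40, predictive_weight=0.5):
    # Each column's score is its feedforward (proximal) overlap plus the
    # predictive potential (distal) of its best-predicted cell.
    score = proximal_overlap + predictive_weight * distal_depolarisation
    # Inhibition: only the top-scoring columns become active.
    return np.argsort(score)[-num_active:]

In current NuPIC the score is the proximal overlap alone, which is why the
column pattern can only ever change as fast as the raw input.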

If you do this, my model demonstrates how all this follows.

Regards

Fergal


On Wed, Aug 27, 2014 at 5:58 AM, Jake Bruce <jakebruce at live.com> wrote:

> Oh and I forgot to discuss topology.
>
> So let's assume columns in a parent layer receive feedforward input in a
> distribution centered on a particular location in the input. Let's also
> assume that distal connections follow the same kind of topological
> distribution.
>
> Now I can see how a parent layer/region might learn that particular cells
> of its input represent the same feedforward input feature but in different
> contexts, and cells may learn that certain distinct distal inputs always
> accompany the same feedforward input, just in different contexts. That all
> makes sense.
>
> But unfortunately topology doesn't help the column activation pattern
> change more slowly if its input is changing quickly, since that pattern is
> determined entirely by the features in the input. If those change fast, the
> representation changes fast, topological or not. And distal connections
> can't slow anything down if the columns are being driven to change quickly
> by the proximal ones.
>
> Jake
>
> ------------------------------
> From: jakebruce at live.com
> To: nupic-theory at lists.numenta.org
> Subject: RE: [nupic-theory] Proposed temporal pooling mechanism
> Date: Wed, 27 Aug 2014 01:50:07 -0300
>
>
> Hi Fergal and all,
>
> I've been trying to integrate this into my mental model of HTM. I may be
> confused on a few points, because it's not falling into place for me yet.
>
> Let's assume all inputs are non-topological, for simplicity. That is, if
> proximal or distal connections form between layers/regions, they are
> equally likely to connect to any cell in the target region. Later I'll
> consider topology.
>
> The feedforward pipeline goes like this, I believe:
>
> (0) Raw input in its natural form ->
> (1) SDR encoder ->
> (2) Primary L4 ->
> (3) Primary L3 ->
> (4) Secondary L4 ->
> (5) Secondary L3 -> ...
>
> and so on up the hierarchy. I've left out L5 and L6 because they don't
> appear necessary at this stage.
>
> We know that at (1), the SDR bit pattern often changes drastically between
> timesteps. Since (2) is receiving standard feedforward input from (1), we
> know that the pattern of column activation will change just as drastically
> as its input. We've all seen an HTM region at work: it's a sea of shifting
> bits, and it can change drastically even when its input changes slightly.
>
> As I understand it, the choice of which columns to activate is completely
> independent of predictions and bursting etc. So regardless of the
> predictive activity of the cells of (2) and its feedback from higher
> regions, the pattern of column activation will change as fast as its input.
>
> Let's step up the hierarchy to pipeline item (3), the Primary L3 that
> receives its feedforward input from the cells in (2), Primary L4. I've
> shown that the column activation pattern of (2) will change as fast as the
> input encoder. Now we look at the feedforward input that (3) receives from
> (2), and see that it is changing fast. By the same reasoning, the column
> activation pattern of (3) will change fast. If the column activation
> pattern changes quickly, the pattern of activation of the cells of those
> columns must change quickly, so the input the layer sends must change
> quickly.
>
> And the same goes for (3) to (4), (4) to (5), etc. up the hierarchy.
>
> So I don't believe a stable top-down representation will ever form with
> standard feedforward propagation. But despite that, let's assume that a
> stable representation of a sequence in Primary L3 does form in Secondary
> L3, and that it is transmitted down to Primary L3 via purely predictive
> distal connections, never proximal ones (again please correct me if I'm
> wrong).
>
> Now, assuming a static top-down signal, we have two possibilities:
>
> - If the column activation pattern in Primary L3 is changing quickly, then
> regardless of the top-down signal, the output it sends up the hierarchy
> will be changing quickly.
> - If the column activation pattern in Primary L3 is changing slowly or not
> at all, then the cells that were depolarized by the top-down signal will be
> roughly constant, and the output it sends up the hierarchy will change
> slowly.
>
> So I can see how a layer will change slowly if its input changes slowly,
> but at the very bottom it doesn't, and unless my reasoning is incorrect,
> there's no opportunity for it to get slower as it goes up the hierarchy.
>
> In a framework as full of subtle emergence as HTM, this may be the
> subtlest I've encountered. It's so subtle that I'm not yet convinced it
> happens. ;)
>
> Jake
>
> ------------------------------
> From: fergalbyrnedublin at gmail.com
> Date: Tue, 26 Aug 2014 18:02:21 +0100
> Subject: Re: [nupic-theory] Proposed temporal pooling mechanism
> To: nupic-theory at lists.numenta.org
>
> Yeah, that's why you need to read the last one (addressed to Dave) first -
> the second last one (addressed to Jake) doesn't make sense until you've
> read the one I forgot to hit send on!
>
>
>
>
> On Tue, Aug 26, 2014 at 5:59 PM, cogmission1 . <cognitionmission at gmail.com
> > wrote:
>
> No problem, I'm still absorbing what you wrote before...
>
>
> On Tue, Aug 26, 2014 at 11:45 AM, Fergal Byrne <
> fergalbyrnedublin at gmail.com> wrote:
>
> Nuts, those emails are out of order. Apologies!
>
>
> On Tue, Aug 26, 2014 at 5:44 PM, Fergal Byrne <fergalbyrnedublin at gmail.com
> > wrote:
>
> Hi Dave,
>
> Good questions. First, we need to be careful about the term "sequence".
> I'll explain things in terms of the kinds of "sequences" we have, as this
> is pretty crucial to understanding how Temporal Pooling works.
>
> In current CLA, the sequence is an exact, high-order sequence, with each
> component SDR appearing in the same order. NuPIC automatically learns these
> sequences if you can signal a reset (to tell the CLA that a new sequence
> has just started), and often without a reset. If you're in such a sequence,
> you get the extra sparse cell-SDR of one-cell-per-column activity. If you
> break anything about the sequence, you get lots of bursting and a high
> anomaly score.
>
> In the Sensorimotor CLA with Temporal Pooling, it's completely different.
> L4 learns to recognise each sensorimotor feedforward input and form a
> noise-tolerant SDR for each. If the transition between one such input and
> the next, in the context of the motor command, has been seen enough before,
> then the post-transition SDR is highly sparse. L3 cells which have a
> constant L1 input (and which thus are more likely to become active each
> timestep) will learn to become sensitive to each successive L4 SDR in a
> "sequence-like set". This sequence-like set can vary from a completely
> unordered set with equally probable motor transitions, to a semi-ordered
> semi-sequence (such as the eye-nose-eye-mouth-hair-eye... saccading when
> looking at a face), to a highly ordered sequence such as saccading while
> reading text quickly. The degree of strict ordering depends on the use
> case, practice, and other factors, and sometimes on the sensory data having
> its own ordering.
>
> The important thing is that L4 only learns individual transitions. The
> predictability of each transition, and the way they chain together, are
> properties of the data and the way that behaviour interacts with the data.
> The kind of sequence/set is a derivative property of this predictability,
> and it's not explicitly visible in L4. There's only recognition of
> sensorimotor patterns (A, B, M, N etc) and successful or unsuccessful
> prediction of transitions (sparse or bursting columns).
>
> It's in L3 that you see the representation of the sequence-set (of items
> recognised in L4), one SDR per "set". Each L3 column tries to recognise
> successive members of sequence-sets in L4 when primed by an even
> slower-changing SDR S presented in L1 by a higher-level region. In my
> example, columns which will present an SDR X in L3 learn to respond to
> successful transitions from members of {A, B, C}, while those for SDR Y in
> L3 respond to successfully predicting L4 cells which are representing
> members of {A, M, N}.
>
> In L3 you also get the higher-order sequence memory we see in the old CLA.
> In my example, X->Y is part of some sequence, say WXYZ. This sequence, if
> it's been seen enough, will be associated in the higher region with the
> single, stable SDR S being fed back into L1 in this region. The transition
> from X to Y in this region's L3 will have an analogous motor command in
> this region's L5; both the Y and the X->Y-related motor command will be
> copied up to the higher region. Note that L3 is already changing at the
> slower X->Y
> rate, while L4 is changing for every input A->B, B->C etc. and this slowing
> is repeated within each region as you go up the hierarchy.
>
> You might imagine an example where L4 is fed strokes or features in
> letters along with microsaccades which look around each letter, and each
> transition is a pair: a feature and the microsaccade which moves to that
> feature. If each of your transitions is well-predicted, L4 is successfully
> picking up all the features of that letter. You'll then have L3 columns
> which will each be sensitive to the cells in L4 which are members of the
> set of strokes for the letter "q". This region will, as a whole, recognise
> and represent one letter at a time. Its L3 will also learn high-order
> sequences of letters from seeing the component strokes of each letter, by
> experiencing each letter in turn, and learning to predict the next
> preferentially (in this case "u" would be likely next) in the context of a
> "word" region above. L5 in turn will generate mostly microsaccades and
> occasionally "next-letter" saccades which cause the L4 to scan the next
> letter.
>
> The "word" region above will receive letter-SDRs from the stroke-letter
> region's L3, along with the "next-letter" motor saccades generated by its
> L5. Its L4 will learn valid letter and letter-sized saccade transitions
> "q->u", and its L4 columns will work on recognising which "sequence-set" of
> letters it's dealing with (in this case, which written word the letters are
> part of). The high-order sequence in the higher region's L3 is a sequence
> of word SDR's, and the next-higher region will thus receive words (along
> with word-sized saccades) and represent them in turn as phrases or
> sentences.
>
> I'll deal with bursting and its propagation shortly.
>
> Regards,
>
> Fergal Byrne
>
>
>
>
> On Tue, Aug 26, 2014 at 2:19 PM, cogmission1 . <cognitionmission at gmail.com
> > wrote:
>
> Strike that last RE:
>
> That wasn't written as intended. It should have read...
>
> Re: Line of inquiry #2, let it be understood that XYZA when representing a
> *name* is *vastly* different from XYZA the sequence. The names XYZA and
> YZA are different distinctions altogether (though they may have a
> parent-child relationship or some other relation), while XYZA and YZA the
> sequences are part of the same thing...
>
>
> On Tue, Aug 26, 2014 at 8:01 AM, David Ray <cognitionmission at gmail.com>
> wrote:
>
> Hi,
>
> What an interesting discussion! Fergal, thank you for that very clear,
> cohesive discussion and also for the "call for standardizing our
> terminology, symbols and discussion framework".
>
> Reading over your thoughts causes me to form two lines of inquiry.
>
> 1.) When "S"-ness is lost because the current feedforward input at t+1 is
> not predicted by Layer 4 in the lower region, the resulting column bursts
> in Layer 4 forward the SDR directly up to Layer 3. What is the nature of
> the message from Layer 3 to the next higher region's Layer 4 input? Is the
> unpredicted SDR passed continuously up the hierarchy until a
> super-contextual region at the top recognizes the SDR as part of some
> super-wide pattern? -OR- is it totally unexpected?
>
> 2.) The elephant in the room. How does the pattern XYZA get recognized as
> such by the Upper Region when the pattern starts at YZA?
>
> Re: Line of inquiry #2, let it be understood that XYZA when representing
> *name* is *vastly* different from XYZA the sequence because YZA although
> related, is a different distinction altogether, from YZA the sequence part.
>
> Sent from my iPhone
>
> On Aug 26, 2014, at 2:44 AM, Fergal Byrne <fergalbyrnedublin at gmail.com>
> wrote:
>
> Hi Jake & Felix,
>
> Thanks for kicking off this discussion. Before I throw my own thinking in
> to the mix, let's agree that we need to use consistent terminology for the
> discussion. Regions, layers, columns, cells, activation etc all have
> specific meanings in HTM, and we need to keep with those.
>
> The Sensorimotor CLA theory has two main parts, one involving a
> feedforward pathway of sensory and motor inputs into L4, through L2/3 and
> up the hierarchy, the other involving L5 and L6 generating feedback and
> behaviour. Your questions concern the former, so let's restrict ourselves
> to that.
>
> In L4, the feedforward input contains incoming afferent "sensory" data
> (could be from lower in the hierarchy) and a copy of associated efferent,
> "motor" (behaviour) signals, also coming from lower regions or actual
> subcortical motor centers. L4 learns the transitions between successive
> sensory inputs as associated, caused or explained by the atomic behaviours
> which accompany them.
>
> Let's look at L4 at time t, assuming it's in its normal, successfully
> predicting, non-bursting mode. Looking just at columns, we see a sparse
> (2%) pattern of activation, call that SDR A. Because L4 is non-bursting,
> the cell activation pattern will be even sparser than 2%, let's say 0.2%
> (one cell per 10-cell column). We'll call the cell-SDR A1 - the same
> columns as A but a particular choice of cells in each column.
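>
> With toy numbers, purely illustrative and not NuPIC defaults, the two
> sparsities work out like this:
>
> num_columns, cells_per_column = 2000, 10
> active_columns = int(0.02 * num_columns)   # column-SDR A: 40 of 2000 = 2%
> active_cells = active_columns * 1          # one winning cell per active column
> cell_sparsity = float(active_cells) / (num_columns * cells_per_column)
> print(cell_sparsity)                       # 0.002, i.e. the 0.2% cell-SDR A1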
>
> These cells project axons up to L3, which innervate the proximal dendrites
> of cells in L3. At the same time, a higher region is projecting a pattern
> representing a more abstract, slower-changing SDR S into L1. The L3 cells
> have apical (distal) dendrites collecting signals up in L1, so a subset
> will be depolarised by their learned connections with the pattern on L1.
> The L3 cells which have both high learned recognition of the L4 SDR and
> high depolarisation due to signals from L1 (in addition to sequence-forming
> lateral connections in L3 itself) will be first to fire and will form the
> SDR in L3. Let's call that SDR X.
>
> Note that X in L3 is sent up into the higher region, and, being a member
> of S, will cause S to continue being sent back down to L1 in this region.
>
> Now, still at time t, we have SDR A/A1 on L4, the active cells of which
> will cause a pattern of predictive depolarisation across L4. This pattern
> will be a union of a small number of favoured predictions which have
> succeeded A/A1 in the past, say B1, C1, A2 (members of a "sequence"
> containing A, B, C, which is strongly associated with X in L3), and less
> strongly M1, N1 (members of another "sequence" AMN in L4, Y in L3). L4 is
> effectively pre-sensitised for particular sensorimotor transitions.
>
> Time moves on from t to t + 1. We get a new sensory input, and an
> associated motor command. The cells representing B1 are already depolarised
> and also happen to respond strongly to the new sensory and motor inputs, so
> they predict correctly and become active (there's nothing special about
> "B1", it's just what we're calling this next SDR).
>
> In L3, cells X are still receiving the "constant" top-down signals S via
> L1, and having seen this transition A1->B1 before within the context {A1,
> B1, C1...}, will become active again (or stay active) and continue to
> represent X1. They would have continued to do so if the input to L4 at t+1
> was any other member of {A1, B1, C1, A2..}, because they have proximal
> dendrites collectively receiving signals from all of these L4 cells.
>
> If, instead, the input at t + 1 to L4 is more strongly associated with M1
> than any of ABC, then many columns in M will burst as they weren't so well
> predicted by A1. Among the cells in L3 which receive signals from these M
> cells will be those we called Y earlier (Y is related to {A, M, N}). If Y
> is also part of sequence S in the higher region, and in particular if X1Y1
> is part of a higher-order sequence in S, then Y cells will be strongly
> depolarised when M cells burst, making it highly probable that L3 will show
> Y at t + 1.
>
> Y will again be projected upwards, and, being another member of S, will
> cause S to be projected back down into L1 in this region.
>
> If, on the other hand, Y is neither a learned successor to X, nor
> associated with S, then most or all cells in Y columns will burst (for lack
> of predictive input), and any S-ness in the higher region will also be
> disrupted.
>
> Adding back in the behaviour part: each region will also be learning how
> behaviour it generates affects the sensory stream it receives. The region
> under study will be generating behaviour in L5 as a result of receiving
> current sensory and motor inputs from below (as L4), as well as current
> stable state X from L3 and current higher-level state S from L1. All these
> signals will combine to drive L5 to generate behaviour SDR X* which leads
> to continuity of X if probable enough, or at least continuity of S by
> causing transition A->M and thus X->Y, for example.
>
> Since in normal circumstances L3 is sending a steady X, and L5 is sending
> X-compatible motor commands up to L4 in the higher region, L4 will happily
> predict X/X* (or Y if X ends), and L3 will continue to show S. (I'm using S
> for both the L3 and L6 SDRs for simplicity, but in the steady state they'll
> both be representing the one "thing" and will change in lockstep).
>
> Note (as I've said in the past) that there's lots of magic going on here,
> with all these sets of cells "just happening" to receive lots of signals
> from one another. This is, in fact, not magic, but a direct result of the
> statistics of these large recursive networks. Each layer begins with enough
> random potential connections that the predictive connections will weakly
> coincide with the feedforward, and any time this happens to activate cells
> (helped by a little noise) those connections are strengthened, eventually
> leading to the "magic" described above. The statistics back this up, but
> the best proof is that brains do work, and they couldn't be doing it any
> other way!
>
> Regards,
>
> Fergal Byrne
>
>
>
>
>
> On Tue, Aug 26, 2014 at 6:03 AM, Jake Bruce <jakebruce at live.com> wrote:
>
> Hi Felix,
>
> I'm not quite sure myself on the details, but I have a few competing ideas
> that each have their merits and problems.
>
> > are you proposing that C1 would remain active indefinitely until it
> received bursting inputs, i.e. even when it had no inputs at all?
> > Or would its activation continue only one time step in the absence of
> feedforward input?
>
> I'm proposing that C1 remains active until another stronger activation
> displaces it. So a competing cell that receives some bursting input is
> likely to supersede it, and by my thinking, that lines up with our desire
> for the parent patterns to "stick" while the child patterns are
> well-predicted. We would decay this required activation threshold over
> time, so the winners don't win forever.
>
> > 1.  Since TP cells remain active even when they are not forced by any
> feedforward inputs, how do they get re-activated again next time the
> sequence is seen?
> > Specifically, if some TP ("name") cells were defined by feedforward
> inputs from the first state in a sequence, how would the same cells be
> activated if we
> > subsequently started from the second state of the sequence, skipping the
> first?
>
> The second desirable property that I see in this
> "sticking-until-displaced" mechanism is that even if we start at different
> parts of the sequence, once the parent sees several timesteps of the child
> sequence, its activation pattern will converge to the strongest activation
> patterns in the sequence as a sort of "sequence fingerprint". That is,
> until the sequence ends and the bursting child columns cause strong
> activations to "reset" the layer to begin encoding the fingerprint of the
> next sequence.
>
> So for your example, let's say in the entire length of the sequence, p, s,
> and x have the greatest feedforward overlap with L1 input. Then if you
> start at the beginning of the sequence, by the end you'll have the
> activation pattern [p,s,x]. If you start in the middle, the fingerprint of
> the sequence degrades a little to, say, [t,s,x]. It would hopefully retain
> enough of its commonality with the original sequence, but I haven't proved
> that yet. Given the resilience of SDRs to aggressive subsampling, I'm
> optimistic.
>
> > Another comes to mind that combines those two: A TP cell remains active
> until sufficient activation in its local neighbourhood from bursting
> columns forces it to turn off.
>
> This is definitely along the lines of what I'm thinking, and should
> operate in roughly the same way. It's definitely important to think of this
> as a local process as opposed to a layer-wide regime. The layer-wide
> behavior comes for free, and more flexibly, by starting from a cell's
> perspective. Certainly illustrative to speak from the layer's perspective
> as well.
>
> Perhaps that's more clear. Perhaps less!
>
> Jake
>
> ------------------------------
> From: felix at nfrac.org
> Date: Tue, 26 Aug 2014 12:26:44 +0800
> To: nupic-theory at lists.numenta.org
> Subject: Re: [nupic-theory] Proposed temporal pooling mechanism
>
>
> Hi Jake
>
> Thanks for starting this discussion. Your proposal is broadly consistent
> with my understanding based on Jeff's talks etc. However the details are
> tricky. Just to clarify, are you proposing that C1 would remain active
> indefinitely until it received bursting inputs, i.e. even when it had no
> inputs at all? Or would its activation continue only one time step in the
> absence of feedforward input?
>
> There are three questions in my mind about Temporal Pooling (TP). I'm
> using a high-order sequence scenario to explain these but I think the
> questions apply to the sensori-motor scheme also.
>
>
> 1.  Since TP cells remain active even when they are not forced by any
> feedforward inputs, how do they get re-activated again next time the
> sequence is seen? Specifically, if some TP ("name") cells were defined by
> feedforward inputs from the first state in a sequence, how would the same
> cells be activated if we subsequently started from the second state of the
> sequence, skipping the first? I can guess two alternative solutions:
>
>  (a) columns containing TP cells grow a wide spectrum of proximal
> connections encompassing all lower-level states in their sequence. Then the
> TP cell could in fact be activated by feedforward inputs from any part of
> its sequence. It seems like it would need to use a more promiscuous
> learning mechanism than normal to achieve this (less negative reinforcement
> of proximal synapses) and would maybe end up losing the signal in the noise.
>
>  (b) allow distal depolarisation to cumulatively trigger activation of TP
> cells. i.e. if there is enough lateral (predictive) input to a certain TP
> cell, say over 2 or more time steps, then it becomes active even without
> proximal forcing.
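>
> A crude sketch of (b), with made-up names and thresholds (not taken from
> any existing implementation): the cell fires once its lateral
> depolarisation has persisted for a couple of steps, with no proximal
> forcing required.
>
> def becomes_active_without_proximal(distal_history, cell,
>                                     min_steps=2, depolarise_threshold=10):
>     # distal_history: one dict per time step, mapping a cell to its summed
>     # lateral (predictive) input on that step
>     recent = [step.get(cell, 0) for step in distal_history[-min_steps:]]
>     return (len(recent) == min_steps and
>             all(v >= depolarise_threshold for v in recent))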
>
>
> 2. How do we limit the number of active cells to a sparse 2% while
> maintaining the activation of Temporal Pooling cells?  If we keep
> accumulating more active TP cells over a long sequence it would get too
> dense. Presumably, active TP cells do inhibit their neighbours just like
> other active cells, but it's not clear who gets priority. This is related
> to the next question:
>
>
> 3. For how long do TP cells stay active? Jake you gave two proposals on
> this, both from a cell's perspective. In Jeff's "fireside chat" at the
> hackathon he seemed to present it from a whole layer's perspective. Though
> maybe that was just to explain the idea.
>
>  (a) Whole layer perspective: TP cells remain active while the lower-level
> layer remains predictable (say less than 50% of active columns are
> bursting). As soon as the input becomes not predictable (> 50% bursting?)
> any TP cells turn off.  This seems flawed in that a small subset of the
> inputs could be predictable and thus poolable, but would not be in this
> model unless it reached some arbitrary threshold. Also a biologically
> realistic model probably should be defined locally not globally.
>
>  (b) Cell perspective: Jake's two proposals. Another comes to mind that
> combines those two: A TP cell remains active until sufficient activation in
> its local neighbourhood from bursting columns forces it to turn off.
>
>
> An example.
>
> Let's say the lower-level layer L1 has learned a sequence O-A-B-C-D.
> Temporal Pooling to "name" this sequence will occur in the higher-level
> layer L2. To start, L1 correctly predicts the inputs for state A and this
> activates cells p,q,r in L2. Cells p,q,r are therefore "TP cells". On the
> next time step state B is predicted in L1.  Of course p,q,r remain active
> in L2 because they are TP cells. But the newly active cells in L1 also
> activate cells s,t,u in L2. These are also candidates to be TP cells.
>
> What happens next is less clear. State C is correctly predicted in L1.
> This causes feedforward activation of cells v,w,x in L2. But do the TP
> cells remain active from A or B or both? And do v,w,x also become active or
> are they potentially inhibited? This is my question 2 above.
>
>
> in       L1         L2 feedforward       + L2 Temporal Pooling cells
> ------------------------------------------------------------------------
> O    bursting     bursting
> A    a1,a2,a3     p,q,r
> B    b1,b2,b3     s,t,u                    p,q,r
> C    c1,c2,c3     v,w,x                    s,t,u + p,q,r (?)
>
>
> My question 1 above applies here as "how do p,q,r become active when we
> present the sub-sequence B-C-D?"
>
>
> As should be obvious I don't know much neuroscience. I'm looking forward
> to being enlightened in due course.
>
> Felix
>
>
>
>
> On 24 August 2014 13:29, Jake Bruce <jakebruce at live.com> wrote:
>
> Hi folks,
>
> The new temporal pooling (name-for-a-sequence) mechanisms have been
> mentioned a lot on this list, but the new developments haven't been
> discussed much in detail. I suppose these developments happen primarily at
> Numenta's physical location. As a result of the sparse discussions I've
> been doing some independent work, and I've come up with a pooling mechanism
> that may be identical, similar, or unrelated to the official new temporal
> pooling. I'm hoping for some feedback on biological and
> information-theoretic plausibility, and maybe some insight into the
> official concepts in development.
>
> Some of the discussion by Jeff and others mentioned that the new
> mechanisms operate on the proximal segments in a feedforward manner, so
> here's my feedforward approach to temporal pooling.
>
> - All the cells of the child region send feedforward input to cells of the
> parent region.
> - Assume we are at time step t, with an SDR in the parent formed as usual
> from the activation of child cells.
> - For time step t+1, active parent cell C1 only changes state if enough of
> its input bits were from bursting (unpredicted) columns in the child region.
>
> A closely related but different approach: parent cell C1 could keep its
> active state until a cell C2 comes along with an activation stronger than
> the one that activated C1, and replaces it. The usurping activation
> threshold would slowly decay, to avoid deadlock.
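>
> A toy sketch of that second variant in Python, with the names, the decay
> constant and the local competition all invented purely for illustration:
>
> class StickyCell(object):
>     """One parent-layer cell that stays active until displaced."""
>     def __init__(self, decay=0.95):
>         self.active = False
>         self.won_with = 0.0          # activation level that made this cell win
>         self.decay = decay
>
>     def step(self, my_activation, best_rival_activation):
>         self.won_with *= self.decay  # the usurping threshold slowly decays
>         if self.active and best_rival_activation > self.won_with:
>             self.active = False      # a stronger rival displaces us
>         elif not self.active and my_activation > best_rival_activation:
>             self.active = True       # we win the local competition
>             self.won_with = my_activation
>         return self.active
>
> Each cell runs this independently, which is where the per-cell, continuum
> behavior described below comes from.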
>
> The way I see it, with a design like this, parent SDRs will represent the
> attractor states derived from a child sequence, so while they may not
> uniquely name the sequences, they might be statistically related enough to
> enable the desired sequence-naming behavior. And this would have the
> desired property of parent patterns changing slowly while child sequences
> are predicted, but quickly ("bubbling up" the hierarchy) when not. And
> since each cell decides whether to change state independently, the
> moment-to-moment difference between changing patterns is on a continuum
> depending on the novelty of the sequence, rather than all-or-nothing. That
> seems like a good thing.
>
> Checking the number of cell activations that come from bursting columns
> strikes me as biologically unlikely, but having a refractory period where a
> cell must be beaten by an even stronger cell before it gives up its active
> state seems more plausible.
>
> What do people think of this: does it seem reasonable?
>
> And to the folks at Numenta, does this resemble the new temporal pooling
> mechanisms at all?
>
> Cheers,
>
> Jake
>
> --
> Felix Andrews / 安福立
> http://www.neurofractal.org/felix/


-- 

Fergal Byrne, Brenter IT

Author, Real Machine Intelligence with Clortex and NuPIC
https://leanpub.com/realsmartmachines

Speaking on Clortex and HTM/CLA at euroClojure Krakow, June 2014:
http://euroclojure.com/2014/
and at LambdaJam Chicago, July 2014: http://www.lambdajam.com

http://inbits.com - Better Living through Thoughtful Technology
http://ie.linkedin.com/in/fergbyrne/ - https://github.com/fergalbyrne

e:fergalbyrnedublin at gmail.com t:+353 83 4214179
Join the quest for Machine Intelligence at http://numenta.org
Formerly of Adnet editor at adnet.ie http://www.adnet.ie