Discussion:
[urbit] A blast from the past
James Bowery
2017-09-26 00:44:46 UTC
Permalink
Yesterday I ran across Nock because I was thinking about doing a quick and
dirty VM based on an early '80s idea I had. I figured I could dispense
with all the complicated optimizations, thanks to 23 doublings under Moore's
law. I posted about this, too informally, on /. in 2000 under "Network
Functional Programming
<https://ask.slashdot.org/comments.pl?sid=6343&cid=930087>". Before I
started coding, I wanted to see if anyone had done anything like it since,
after all, it's been 35 years! So googling some keywords led me to
discover first Nock and then the stack, which is intended to solve the
problem I had been hired to solve for AT&T and Knight-Ridder way back then
-- but that's another story
<https://tech.slashdot.org/comments.pl?sid=2702791&cid=39217853>.

I was quite excited to see the possibility that someone had finally solved
the problem, but when I discovered Nock didn't do lazy evaluation --
instead leaving it as an option to higher levels -- I realized it wasn't
going to do what I had hoped.

Let me explain the basic idea in more detail because it isn't "just" lazy
evaluation, and I'm not sure -- even after doing the searches -- that
anyone has seriously researched it. And, just to preempt -- I'm not
talking about "reactive" or "functional reactive" programming as normally
conceived either.

Normally what people think of when they hear "lazy" or "demand driven" is
that there is some kind of output required and everything happens -- only
-- in service of that observation, and once the output is generated, it's
all over until the next demand drives the evaluation again. The network
architecture I was targeting used dependency graphs required by David
Reed's NAMOS system* (eventually becoming TeaTime for Alan Kay's Croquet
system) for networked atomic actions, but the graphs were, along with
pervasive memoized values, built by lazy evaluation and remained in place,
connecting the inputs to the network to the continuously observed outputs
of the network. If one of the inputs changes, the change propagates through
the dependency graphs as a data-driven or eager evaluation -- terminating
any propagation when an evaluation produces no change in its memoized
output. If an observation goes away, the reference counts are decremented
through the graphs along with the dependency links, and at 0 references the
memoized value is voided -- although not deleted, as NAMOS was a write-once
or single-assignment system (depending on one's paradigm), and the result is
journaled as it rolls off the system, providing an audit trail and the
potential for pseudo-time travel. If an eager evaluation propagates all the
way to an output and discovers that output is inactive, it assumes the
removal of the observer failed to decrement the reference counters, and that
cleanup happens at that point.
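
Here is a minimal sketch of that evaluation discipline in Python -- every
name in it is mine, invented for this email, not drawn from NAMOS or any of
the systems above: the graph and its memoized values are built lazily on
first demand and stay in place; later input changes propagate eagerly and
stop wherever a recomputed memo is unchanged; dropping an observer unwinds
reference counts through the graph and voids memos at zero.

    # Hypothetical sketch only: lazily built dependency graph, persistent
    # memoized values, eager change propagation that halts on unchanged
    # memos, and refcount-driven teardown when observers go away.

    class Node:
        def __init__(self, fn=None, inputs=()):
            self.fn = fn                 # None for pure input nodes
            self.inputs = list(inputs)   # upstream dependencies
            self.dependents = []         # downstream links, built lazily
            self.memo = None
            self.valid = False
            self.refs = 0                # observers + downstream references

        # Lazy phase: the first demand builds links and memoized values.
        def demand(self):
            self.refs += 1
            if not self.valid:
                for up in self.inputs:
                    up.demand()
                    up.dependents.append(self)   # graph remains in place
                if self.fn is not None:
                    self.memo = self.fn(*[up.memo for up in self.inputs])
                self.valid = True
            return self.memo

        # Eager phase: an input change propagates data-driven downstream.
        def set(self, value):
            assert self.fn is None, "only input nodes are set directly"
            if self.valid and self.memo == value:
                return                   # no change, nothing propagates
            self.memo, self.valid = value, True
            for down in self.dependents:
                down._repropagate()

        def _repropagate(self):
            new = self.fn(*[up.memo for up in self.inputs])
            if new == self.memo:
                return                   # unchanged memo ends the wave here
            self.memo = new
            for down in self.dependents:
                down._repropagate()

        # Teardown: an observer goes away and reference counts unwind.
        def release(self):
            self.refs -= 1
            if self.refs == 0:
                self.valid = False       # memo voided (journaling omitted)
                for up in self.inputs:
                    if self in up.dependents:
                        up.dependents.remove(self)
                    up.release()

For instance, with price and qty as input nodes and total as
Node(lambda p, q: p * q, inputs=(price, qty)), the first total.demand()
builds the links and memoizes 30 for inputs 10 and 3; a later price.set(12)
then eagerly updates total.memo to 36 with no new demand.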

Now, this may seem like way too much functionality to put into such a
primitive VM as Nock, but one must bear in mind that SK reduction machines
were widely considered to be a viable route for massively parallel dataflow
evaluation back in the halcyon days after Backus's Turing Award lecture
<https://www.cs.ucf.edu/~dcm/Teaching/COT4810-Fall%202012/Literature/Backus.pdf>.
Also bear in mind that Nock contains features that permit it to _formally_
fulfill the rigor of such horrors as Church numerals without sacrificing
much in the way of performance. You don't need to put all that machinery
everywhere -- but when you need it, which is almost all the time -- you
_really_ need it.

For example, one symptom that one needs it is finding oneself having to
construct "make"-like or "build" systems to maintain file-system version
integrity. What you _really_ need is a generalized atomic
action system such as that provided by NAMOS, and a "release" of a "build"
is just the commit of an atomic action that propagates changes outside of
the temporary** "fork" in reality.
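
To make that concrete, a purely illustrative sketch in Python (all names
invented here, nothing taken from NAMOS): a build runs against a temporary
fork of a write-once store, and "release" is one atomic commit that
publishes the fork's changes as a new immutable snapshot.

    # Illustrative only: builds work in a temporary fork of a versioned
    # store; "release" is a single atomic commit that makes them visible.
    # Old snapshots are never mutated, so history doubles as an audit trail.

    import threading

    class Store:
        def __init__(self):
            self._versions = [{}]        # write-once history of snapshots
            self._lock = threading.Lock()

        def snapshot(self):
            return dict(self._versions[-1])

        def fork(self):
            return Fork(self, self.snapshot())

    class Fork:
        def __init__(self, store, view):
            self.store = store
            self.view = view             # private, mutable working copy

        def put(self, path, content):
            self.view[path] = content    # stays inside the fork until commit

        def commit(self):
            with self.store._lock:       # the atomic "release"
                self.store._versions.append(dict(self.view))

    store = Store()
    build = store.fork()
    build.put("bin/app", "compiled output")  # invisible intermediate state
    build.commit()                           # one atomic action publishes it
    print(store.snapshot()["bin/app"])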

*"Naming and Synchronization In a Decentralized Computer System
<http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-205.pdf>" -- an
early form of named data networking aka content-centric networking aka
information-centric networking. It turns out that Arvind and Gostelow --
down one story from Reed at the MIT LCS -- had used virtually isomorphic
data structures to Reed's NAMOS in their contemporaneous dataflow machine
(the "U-Interpreter <http://ieeexplore.ieee.org/document/1653940/>") to
perform the data-driven evaluations but neither they nor Reed were aware of
this until I pointed it out to them. Their "tagged architecture" had
"tags" that were pretty much the same as Reed's "names", to match up data
flow tokens.

**This "ease of forking reality" is, by the way, automatically accomplished
by non-determinism and is a reason people working on pure functional
operating systems find themselves going relational, since functions are
degenerate relations that may be considered inconsistent "forks", as
required by the CAP theorem, among other realities.
James Torre
2017-09-26 00:55:36 UTC
Permalink
Not to imply you haven't already reviewed this, but a distributed graph of
dependency-driven propagation of computation superficially resonates with
my understanding of Radul and Sussman's "The Art of the Propagator" [0]
(whose abstract should be sufficient to inspire or deter interest, on the
chance it is as yet uninspected).

--jpt4

[0] (pdf warning)
http://dspace.mit.edu/bitstream/handle/1721.1/44215/MIT-CSAIL-TR-2009-002.pdf?sequence=1
Jacob MacDonald
2017-09-26 03:01:05 UTC
Permalink
Good to see this list getting traffic again, but discussion has mostly moved
to https://urbit.org/fora. You might get more discussion if you cross-post
there.

Best,

Jacob.
James Bowery
2017-09-26 03:08:54 UTC
Permalink
FORA <http://software.bbsdocumentary.com/IBM/DOS/FORA/>, huh?
Curtis Yarvin
2017-09-26 23:25:10 UTC
Permalink
James, thanks for your interest! Unfortunately all your Slashdot
links are broken for me at the moment.

It is always a struggle for me to answer a question like "does Urbit
have X," because the question is always: at what layer? Lazy
evaluation is a perfect example -- we could have put lazy evaluation
in Nock or even in Hoon, but instead we left it as an exercise for the
reader.

There is truly nothing new under the sun in system software. But
there are styles and tendencies. One tendency I think we can agree
existed in early distributed-systems designs, where "early" means 70s
and 80s, maybe even a little 90s -- I think of systems like Amoeba and
Emerald -- is a tendency to construct complex, fine-grained system
semantics across nodes. Such as a distributed-dependency build
network, process migration, etc. I also think of Croquet as basically
in this tradition. Anything that makes you think of phrases like
"distributed garbage collection."

For better or worse, these systems did not prosper and were widely
outcompeted by the much cruder Unix. Unix in turn found a way to use
only one of its ports, in only one direction -- begetting the Web.
This retrogression needs to be explained, if we are thinking of going
back to the past (and why shouldn't we?).

The fact is that even Unix, and earlier/more ambitious systems even
more so, was very oriented toward trusted networks and/or LANs.
Nothing interesting became a successful service on the public wide
area. Only the stupid survived.

My background is definitely the Unix world, and I always want to push
semantics out of the core and up to the programmer. Making Urbit less
stupid is always a proposition I approach with great trepidation.

That said, one general principle I want to enforce (and that is
currently enforced, or at least patchily aimed at) is: any remote
resource you can read, you can also track/subscribe to. Preferably
with a subscription lease/timeout, so that we don't think of phrases
like "distributed garbage collection." Can this lead to effective
distributed dependency graphs? Maybe, but these graphs would be
emergent idioms that, from Arvo's perspective, would exist as patterns
of use and not system features.
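
A rough sketch of that lease idea in Python -- the names and numbers are
mine, not Arvo's: each subscription carries an expiry, the publisher simply
drops subscribers whose lease has lapsed unless they renew, and nothing
resembling distributed garbage collection is ever needed.

    # Sketch of lease-based subscriptions: the publisher forgets any
    # subscriber whose lease expires, so a vanished remote never leaves a
    # dangling reference and no cross-node collection is required.

    import time

    class Publisher:
        LEASE_SECONDS = 30.0             # arbitrary illustrative lease

        def __init__(self):
            self._subs = {}              # subscriber id -> lease expiry

        def subscribe(self, sub_id):
            self._subs[sub_id] = time.monotonic() + self.LEASE_SECONDS

        # Subscribers renew before expiry to keep receiving updates.
        renew = subscribe

        def publish(self, update, send):
            now = time.monotonic()
            # Expired leases are dropped here, not garbage-collected later.
            self._subs = {s: t for s, t in self._subs.items() if t > now}
            for sub_id in self._subs:
                send(sub_id, update)

    pub = Publisher()
    pub.subscribe("~sampel-palnet")
    pub.publish("new revision", lambda who, what: print(who, "<-", what))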

Another way to say this is that Urbit is actually very Unixy in some ways.
It exports a very simple, rigorous interface to the network -- which in
Urbit at least is typed and signed. There is a one-way poke with
end-to-end acks, and a streaming read-only subscription. Even a
one-time read is a special case of a subscription. A poke has no
return value except for error notifications, nor does it even have an
endpoint name -- the content-type defines the endpoint. Really as
minimal a design as I think is usable. And each node, though defined
functionally, is a completely imperative state machine. And there is
no suggestion of anything like distributed data structures,
distributed computations, etc. "Simplicate and add more lightness."
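
For illustration only, here is roughly how that interface reads as Python
type stubs -- my own rendering for this email, not Urbit's actual API: a
poke is one-way and yields nothing but an ack or an error, the mark
(content type) names the endpoint, and a one-time read is just a
subscription taken once.

    # My own rendering of the interface described above, not Urbit's real
    # API: a fire-and-forget poke with an end-to-end ack or error, a
    # streaming read-only subscription, and read as a one-shot subscription.

    from dataclasses import dataclass
    from typing import Iterator, Union

    @dataclass
    class Ack:
        pass                             # positive end-to-end acknowledgment

    @dataclass
    class Nack:
        error: str                       # the only "return" a poke can have

    @dataclass
    class Poke:
        mark: str                        # content type doubles as endpoint
        payload: bytes

    class Peer:
        def poke(self, p: Poke) -> Union[Ack, Nack]:
            """One-way message; nothing comes back but the ack."""
            raise NotImplementedError

        def subscribe(self, path: str) -> Iterator[bytes]:
            """Streaming, read-only subscription to a remote resource."""
            raise NotImplementedError

        def read(self, path: str) -> bytes:
            """A one-time read is a subscription taken exactly once."""
            return next(iter(self.subscribe(path)))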
Curtis Yarvin
2017-09-27 00:45:03 UTC
Permalink
I put this at:

https://urbit.org/fora/posts/~2017.9.26..23.26.32..1066~/

Our fora works almost as if it were still 1976. But the patient and
gentle are rewarded.