Symbolics Lisp Machine demo Jan 2013 by kalman reti notes

Posted on 2024-02-27 10:10 lisp

last in the series of guru narrations by kalman reti on genera use. he rehashes a lot of subjects he previously covered, but the core of the video is loading an image and then manipulating it through indirect arrays (more...)

Genera emulator demo of smooth scrolling by kalman reti notes

Posted on 10:03 lisp

this is another guru narration by kalman reti of lisp machine use, this time the goal is to produce a cute hack to autoscroll a sheet of music (more...)

SymbolicsTalk28June2012 by kalman reti notes

Posted on 09:51 lisp

Kalman Reti, the last symbolics engineer, did a series of video presentations on using genera. I had those videos lying on my desktop to watch carefully and extract whatever useful knowledge I could. There's no system to the notes, it's mostly shortcuts, or snippets of code, of interest only to me. Some things I know and use already, but others were a useful first exposure. (more...)

usim debug

Posted on 2024-02-24 15:29 notes

Random usim debug videos I had on my desktop. They are not of interest to anyone, but they document the debugging and fixing of the mouse issues on System 99 that I worked on with ams. (more...)


Posted on 14:19 notes

Some time ago I was experimenting with iPhone's LIDAR/TrueDepth technology, which got me this figurine scan. It's been sitting on my desktop, so I figured why not try and figure out how to post it to the blog. Here it is in all its janky glory, (more...)

Splitting the primary and the sub key in a gnupg keychain

Posted on 2018-11-12 23:52 technical

A month ago my key expired again, so I spent some time exploring subkey behavior in gnupg. This is an experimental procedure for converting a typical primary/sub key pair into a pair of independent keys. I don't have practical experience with all the different implications of the switch, so proceed at your own risk.

The first and most important point is that, contrary to my previous assumption, pgp can work with a single, primary key.

The default key generator, though, forces you into a particular set of ancient assumptions, the same ones that also gave us keychain databases, trust levels, signing parties and other such rituals that seem to have been designed in a vacuum, and failed to gain standing in the real world. A key can be used for Signing and Encryption, but also for Certification, that is, the signing of subkeys in a keychain. What a key is allowed to do is stored in the key's usage flags, and is enforced by gnupg. The usage is split between the primary key and any number of subkeys. By default the primary key is allowed to Sign and Certify, while the subkey can only Encrypt. Gnupg gives preference to subkeys, so if e.g. a subkey has the Sign flag, it will be used for signing instead of the primary key.
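
The usage flags themselves are a single octet of bit flags in a signature subpacket. As an illustrative sketch (Python, not part of gnupg; flag values are the ones defined in RFC 4880, section 5.2.3.21):

```python
# Decode an OpenPGP "key flags" subpacket octet (RFC 4880, 5.2.3.21).
# Illustrative sketch only; flag constants are from the RFC.
KEY_FLAGS = {
    0x01: "Certify",                 # may certify other keys (sign subkeys/uids)
    0x02: "Sign",                    # may sign data
    0x04: "Encrypt communications",
    0x08: "Encrypt storage",
    0x20: "Authenticate",
}

def decode_key_flags(octet):
    """Return the list of capabilities encoded in a key-flags octet."""
    return [name for bit, name in sorted(KEY_FLAGS.items()) if octet & bit]

# A default gen-key primary carries Sign+Certify (0x03); the default
# subkey typically carries both encrypt flags (0x0C):
print(decode_key_flags(0x03))  # ['Certify', 'Sign']
print(decode_key_flags(0x0C))  # ['Encrypt communications', 'Encrypt storage']
```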

Here's an example of a typical primary/sub pair, that you get with gen-key:

% gpg --homedir keys --edit-key 60D55B29
Secret key is available.

pub  2048R/60D55B29  created: 2018-10-12  expires: never       usage: SC
                     trust: ultimate      validity: ultimate
sub  2048R/4FC43EBB  created: 2018-10-12  expires: never       usage: E
[ultimate] (1). test key

The goal is to remove the subkey and allow the primary key to also encrypt. You can delete the subkey using the edit-key interface, but the procedure for editing usage flags is more elaborate. Unfortunately I haven't found a way to edit a key's usage flags after the key has been generated, but there's a way to surgically split the keys and give them whatever options you need. Frankly I'm surprised there isn't an easier way!1

A pgp key consists of a series of packets,

% gpg --homedir keys --export-secret-key | pgpdump |grep Packet
Old: Secret Key Packet(tag 5)(920 bytes)
Old: User ID Packet(tag 13)(16 bytes)
Old: Signature Packet(tag 2)(312 bytes)
Old: Secret Subkey Packet(tag 7)(920 bytes)
Old: Signature Packet(tag 2)(287 bytes)

Secret key/subkey packets contain the necessary values to do RSA computation and nothing else. The only difference between a key and a subkey is the packet tag.

Old: Secret Key Packet(tag 5)(920 bytes)
        Ver 4 - new
        Public key creation time - Fri Oct 12 00:29:24 EDT 2018
        Pub alg - RSA Encrypt or Sign(pub 1)
        RSA n(2048 bits) - ...
        RSA e(17 bits) - ...
        RSA d(2044 bits) - ...
        RSA p(1024 bits) - ...
        RSA q(1024 bits) - ...
        RSA u(1023 bits) - ...
        Checksum - 3f b4
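
The tag lives in the first octet of each packet. For old-format packets the layout is: bit 7 set, bit 6 clear, bits 5..2 the tag, bits 1..0 the length type (per RFC 4880). A hypothetical sketch of both directions in Python:

```python
def old_format_tag(header_byte):
    """Extract the packet tag from an old-format OpenPGP header octet.
    Layout: bit 7 set, bit 6 clear, bits 5..2 tag, bits 1..0 length type."""
    assert header_byte & 0x80, "not a packet header"
    assert not (header_byte & 0x40), "new-format header"
    return (header_byte >> 2) & 0x0F

def old_format_header(tag, length_type):
    """Assemble an old-format header octet from a tag and a length type."""
    return 0x80 | (tag << 2) | length_type

# A Secret Subkey Packet (tag 7) with a two-byte length starts with octet
# 157; rewriting that octet to 149 turns it into a Secret Key Packet (tag 5),
# which is the byte-patching trick used later in this post:
print(old_format_tag(157))      # 7
print(old_format_header(5, 1))  # 149
```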

A user id packet is a string value that's assembled out of the relevant gen-key questions,

Old: User ID Packet(tag 13)(16 bytes)
        User ID - test key

A signature packet is the bucket full of Everything Else,

Old: Signature Packet(tag 2)(312 bytes)
        Ver 4 - new
        Sig type - Positive certification of a User ID and Public Key packet(0x13).
        Pub alg - RSA Encrypt or Sign(pub 1)
        Hash alg - SHA1(hash 2)
        Hashed Sub: signature creation time(sub 2)(4 bytes)
                Time - Fri Oct 12 00:29:24 EDT 2018
        Hashed Sub: key flags(sub 27)(1 bytes)
                Flag - This key may be used to certify other keys
                Flag - This key may be used to sign data
        Hashed Sub: preferred symmetric algorithms(sub 11)(5 bytes)
                Sym alg - AES with 256-bit key(sym 9)
                Sym alg - AES with 192-bit key(sym 8)
                Sym alg - AES with 128-bit key(sym 7)
                Sym alg - CAST5(sym 3)
                Sym alg - Triple-DES(sym 2)
        Hashed Sub: preferred hash algorithms(sub 21)(5 bytes)
                Hash alg - SHA256(hash 8)
                Hash alg - SHA1(hash 2)
                Hash alg - SHA384(hash 9)
                Hash alg - SHA512(hash 10)
                Hash alg - SHA224(hash 11)
        Hashed Sub: preferred compression algorithms(sub 22)(3 bytes)
                Comp alg - ZLIB (comp 2)
                Comp alg - BZip2(comp 3)
                Comp alg - ZIP (comp 1)
        Hashed Sub: features(sub 30)(1 bytes)
                Flag - Modification detection (packets 18 and 19)
        Hashed Sub: key server preferences(sub 23)(1 bytes)
                Flag - No-modify
        Sub: issuer key ID(sub 16)(8 bytes)
                Key ID - 0xC1E46ED560D55B29
        Hash left 2 bytes - 1f 27
        RSA m^d mod n(2048 bits) - ...
                -> PKCS-1

It contains and signs various dates, flags, and preferences. Of interest here are the "key flags". Now one could write a pgp packet editor that patches the relevant bits in, and I suspect that's something asciilifeform might have on his workbench already, but there also exists a somewhat odd way of breaking the keys apart and then reassembling them from bits.

We will need a signature with the right key flags enabled. I said that gnupg gives you a primary/sub key pair by default, but with the --expert flag you can make it produce whatever key configuration you want, and that should really be the recommended way of making new keys in the republic.

% mkdir tmp
% gpg --homedir tmp --gen-key --expert
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
   (7) DSA (set your own capabilities)
   (8) RSA (set your own capabilities)
Your selection? 8

Possible actions for a RSA key: Sign Certify Encrypt Authenticate
Current allowed actions: Sign Certify Encrypt

   (S) Toggle the sign capability
   (E) Toggle the encrypt capability
   (A) Toggle the authenticate capability
   (Q) Finished

Your selection? q
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 1024

Now we split the two key sets we have with a gpgsplit utility that's part of gnupg,

% gpg --homedir keys --export-secret-key | gpgsplit -p k
% gpg --homedir tmp --export-secret-key | gpgsplit -p tmp
% ls
k000001-005.secret_key    k000005-002.sig           tmp000002-013.user_id
k000002-013.user_id       keys                      tmp000003-002.sig
k000003-002.sig           tmp
k000004-007.secret_subkey tmp000001-005.secret_key

which gives us all the packets in separate files. The ones prefixed with k are the original keys (there are 5 files that correspond to the 5 packets from pgpdump above), the tmp ones are the temp key we just generated.

An arbitrary key combination can now be assembled out of the individual packets; we use the original primary key, the original user id and the signature from the temp key.

% cat k000001-005.secret_key k000002-013.user_id tmp000003-002.sig > key

A straight import of the resulting key will fail because of the invalid signature, but passing --allow-non-selfsigned-uid will bypass the signature verification, while still applying whatever preferences are stored in the signature packet,

% gpg --homedir tmp1 --allow-non-selfsigned-uid  --import key
gpg: WARNING: unsafe permissions on homedir `tmp1'
gpg: keyring `tmp1/secring.gpg' created
gpg: keyring `tmp1/pubring.gpg' created
gpg: key 60D55B29: secret key imported
gpg: key 60D55B29: accepted non self-signed user ID "test key"
gpg: tmp1/trustdb.gpg: trustdb created
gpg: key 60D55B29: public key "test key" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg:       secret keys read: 1
gpg:   secret keys imported: 1
gpg: no ultimately trusted keys found

Finally, in order to fix the key's internal consistency, we need to delete the bogus signature and re-sign the identity,

% gpg --homedir tmp1 --allow-non-selfsigned-uid --edit-key 60D55B29
Secret key is available.

pub  2048R/60D55B29  created: 2018-10-12  expires: never       usage: SCEA
                     trust: unknown       validity: unknown
[ unknown] (1). test key

gpg> uid 1

pub  2048R/60D55B29  created: 2018-10-12  expires: never       usage: SCEA
                     trust: unknown       validity: unknown
[ unknown] (1)* test key

gpg> delsig
uid  test key
sig?3        4FEE77E7 2018-11-13
Delete this unknown signature? (y/N/q)y
Deleted 1 signature.

gpg> sign

pub  2048R/60D55B29  created: 2018-10-12  expires: never       usage: SCEA
                     trust: unknown       validity: unknown
 Primary key fingerprint: 454C 1EFC A29D 02A0 E6CE  3A47 C1E4 6ED5 60D5 5B29

     royal astronomer

Are you sure that you want to sign this key with your
key "test key" (60D55B29)

This will be a self-signature.

Really sign? (y/N) y

gpg> save

The entire ungodly procedure gives us the original primary key 60D55B29 with all the usage flags enabled. The procedure can be repeated with the subkey, and is left as an exercise for the reader. It requires first patching the first byte of k000004-007.secret_subkey to 149, to switch it from Secret Subkey Packet(tag 7) to Secret Key Packet(tag 5).2

Now one can distribute the primary key as the canonical key, and keep the newly elevated subkey around to decrypt messages from the correspondents who are using an old pubkey.

  1. I believe gpg2 allows you to edit usage flags, but since I don't have it anywhere on my machines I can't confirm it. []
  2. I've used the following trivial Ada tool,
    with Ada.Direct_IO;
    with Ada.Command_Line; use Ada.Command_Line;
    procedure P is
       type Byte     is mod 2**8;
       package Io is new Ada.Direct_IO(Byte); use Io;
       F: Io.File_Type;
       B: Byte;
       Position: Positive_Count;
       Wrong_File: exception;
    begin
       Open( F, Inout_File, Argument(1) );
       Position := Positive_Count'Value( Argument(2) );
       B := Byte'Value( Argument(3) );
       Write( F, B, Position );
       Close( F );
    end P;

    which can be used thus,

    % gprbuild p.adb
    % ./p 000004-007.secret_subkey 1 149
    % pgpdump 000004-007.secret_subkey | grep Packet
    Old: Secret Key Packet(tag 5)(920 bytes) []

updated for vtools

Posted on 2018-09-30 23:09 technical, vtools

As has been discussed in the logs, vtools doesn't stand on its own as a V implementation. Instead it's a collection of tools for working with vpatches. V authors can use vtools so as not to rely on the often brittle GNU utilities.

On my own workbench I've been using a patched up version of asciilifeform's original, where I replaced the call to GNU patch with one to vpatch. The replacement is essentially a drop in1, with the advantage of being much stricter about which vpatches are accepted, and of making sure that the press hashes are valid.

I have barely touched it otherwise2, so I consider this a proof of concept release. It consists of two patches: the original version 99 genesis3 and my own modifications.

Unless you have a working keccak v, you'll need to bootstrap manually. Assuming you have a working vtools build,

PATH=path to vtools:$PATH
mkdir {wot,seals,patches}
curl --silent -o wot/phf.asc
gpg --import wot/phf.asc
curl --silent -o patches/v99.vpatch
curl --silent -o patches/v98.vpatch
curl --silent -o seals/v99.vpatch.phf.sig
curl --silent -o seals/v98.vpatch.phf.sig
gpg --verify seals/v99.vpatch.phf.sig patches/v99.vpatch
gpg --verify seals/v98.vpatch.phf.sig patches/v98.vpatch
cat patches/v99.vpatch patches/v98.vpatch | vpatch
pip install python-gnupg
chmod +x v/
./v/ --wot ./wot -fingers --seals ./seals ./patches p ./patches/v98.vpatch v_press

You now have a self-pressed copy in the v_press directory!

Some things to note: the bulk of the bootstrapping effort is verifying the patch signatures, something that v does for you. On the other hand, you can just cat any number of patches into the vpatch utility and it will produce a verified press. Asciilifeform's script uses stock python, but it does depend on the python-gnupg package, which can be installed through pip or whatever your global packaging system is (on gentoo it's emerge python-gnupg).

  1. right now vpatch doesn't support a target directory, and presses into the current directory, so I had to make some changes to accommodate its concept of a destination. i also hardened the call out to an external process, though perhaps unnecessarily. []
  2. I have also added support for subkeys. []
  3. the first release is actually version 100, but the diff between 100 and 99 is in my opinion entirely cosmetic, so I avoided pedantically reconstructing the entire chain, and started with a canonical version of []

vtools complete keccak prerelease

Posted on 2018-04-07 20:20 notes, technical, vtools

I'm going to call this post a vtools pre-release. I'm deferring the proper release write up till Wednesday, but meanwhile the relevant release work has been done, and it's a good time to point interested parties to the bits so that further log discussion can happen. I doubt that my write ups stand on their own, that is, without also closely following the goings-on in the logs, but this post in particular is only of interest to specific people.

I've reground the project around a manifest file. From previous conversations, it seems like the format is mostly inconsequential, so I'm using <date> <nick>\n<message>. For an example of a manifest file press, you can see the implicit press order in the btcbase annotation, and how the manifest change looks in a vpatch file. In the process I've discovered that the btcbase presser isn't working quite right, so at the moment /tree/ shouldn't be relied on for exploration of the press.1

Keccak vdiff/vpatch are now at feature parity with the existing shell based tooling; specifically, vpatch now supports the no newline directive. We're going to start working with a complete round trip in mp-wp, which is going to be a keccak-only release. I would still like to make vpatch work with SHA-512 though.

Current complete patchset, with vtools_vpatch_newline keccak and vdiff_sha_static SHA-512 heads,
  1. e.g. vtools_vpatch_newline's manifest contains two extra entries at the end, which is a result of a buggy press rather than the contents of the relevant vpatches []

vtools vpatch

Posted on 2018-03-22 15:00 technical, vtools

I just wrapped up a busy three weeks' worth of a family trip and two back to back conferences. I completely forgot how exhausting conferences are, and how little time they leave for anything else. There's a short backlog of posts that I'm going to publish once I'm back home, but now that I've had a chance to recover a bit, I'm going to release what I managed to work on during my travels.

I present for your consideration a proof of concept release of a vpatcher that can press keccak patches. This implementation was modeled on the vpatch parser/press that's used in btcbase to render patches and, more importantly, to produce an in-memory press. While the general architecture was chosen upfront, this implementation was authored incrementally, a development style that is surprisingly well supported by Ada1, so the bulk of the functionality is contained in a single file. At this point I'm not convinced that some kind of split is required, though splitting it in the style of ascii's FFI might bring some clarity.

Vpatch is essentially a standard unix patch tool that supports strict application of a unified diff and verification of V hashes. It takes a patch on the standard input stream and attempts to press its contents into the current directory. Existing files are patched, new files are created, old files are removed. Processing is done one file at a time, and the operation terminates with an error when an issue is encountered. The patcher keeps a temporary file around for the result output, which gets moved into place once the file's hash has been verified. This means that atomicity is preserved at the file level, but not at the patch level, and a failed press results in an inconsistent state of the directory. Individual files are always either in the previous or the new state, which means that you get to inspect the offending file, but you have to fully redo the press on failure. This is a decision that I might have to reconsider, at the expense of increased complexity. Right now very little is kept in memory: information about the current file being patched, the current hunk, and whatever simple data is used to track the state.
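
The per-file atomicity strategy described above (write to a temporary file, verify the hash, only then rename into place) can be sketched like this. A hypothetical Python illustration using SHA-512, not vpatch's actual Ada code:

```python
import hashlib
import os
import tempfile

def press_file(path, new_contents, to_hash):
    """Replace path with new_contents, but only after the output hash
    checks out. The target file is always either fully old or fully new.
    Sketch of the strategy described above, not vpatch's actual code."""
    if hashlib.sha512(new_contents).hexdigest() != to_hash:
        raise ValueError("to hash doesn't match")
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)  # temp file beside the target
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_contents)
        os.replace(tmp, path)  # atomic rename on POSIX
    except Exception:
        os.unlink(tmp)         # failed press leaves the old file in place
        raise
```

Keeping the temp file in the same directory as the target matters: rename is only atomic within a filesystem.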

To build the patcher use make vpatch, or call gprbuild on vpatch.gpr directly. To test the patcher, I reground the trb stable release. Since the patcher doesn't verify sigs, I haven't signed the regrind, and it's provided for testing purposes only.

Press the genesis,

% vpatch < ps/1.vpatch
creating bitcoin/.gitignore
creating bitcoin/COPYING
creating bitcoin/src/wallet.cpp
creating bitcoin/src/wallet.h

Apply the next patch on top of it,

% vpatch < ps/2.vpatch
patching bitcoin/src/bitcoinrpc.cpp
patching bitcoin/src/db.cpp
patching bitcoin/src/headers.h
patching bitcoin/src/init.cpp
deleting bitcoin/src/qtui.h
patching bitcoin/src/util.h
patching bitcoin/src/wallet.cpp

If we now try to rerun genesis in the same folder we get,

% vpatch < ps/1.vpatch
creating bitcoin/.gitignore

raised VPATCH.STATE : attempt to create a file, but file already exists

Likewise an attempt to reapply the second patch results in failure, since the affected files no longer have the expected input hashes,2

% vpatch < ps/2.vpatch
patching bitcoin/src/bitcoinrpc.cpp

raised VPATCH.STATE : from hash doesn't match

The same will happen if we attempt to apply a significantly later patch, since the necessary intermediate patches are missing,

% vpatch < ps/12.vpatch
patching bitcoin/src/main.cpp

raised VPATCH.STATE : from hash doesn't match

Finally applying the correct patch succeeds,

% vpatch < ps/3.vpatch
patching bitcoin/src/db.cpp
patching bitcoin/src/init.cpp
patching bitcoin/src/main.cpp
patching bitcoin/src/main.h
patching bitcoin/src/makefile.linux-mingw
patching bitcoin/src/makefile.unix
patching bitcoin/src/net.cpp

Supporting pipe streaming means that we can start vpatch and incrementally feed it patches, moving the tree towards the top of the press. (In this case the patches that we're pressing are named in order from 1 to 27.)

% cat ps/{1..27}.vpatch | vpatch
creating bitcoin/.gitignore
creating bitcoin/COPYING
creating bitcoin/deps/Makefile
creating bitcoin/deps/Manifest.sha512
patching bitcoin/src/db.cpp
patching bitcoin/src/init.cpp
patching bitcoin/src/main.cpp
creating bitcoin/

The press can be sanity checked using e.g. the checksum file from btcbase, but obviously the tool itself does both input and output hash verification as it goes.

There are some known issues, the biggest one being that \ No newline at end of file doesn't work yet: the patcher fails when it encounters the directive. Halfway through development I discovered that Text_IO is idiosyncratic: there's no machinery to produce a line without a newline at the end, or to figure out whether or not an existing file has one.3 Ada always outputs a valid text file and there's no way to avoid it with Text_IO. Future developers beware! A solution to this problem is to use Sequential_IO specialized on Character, but that means writing one's own high level procedures like Get_Line. In a work in progress modification that uses Sequential_IO I was able to build a drop in replacement for Text_IO with a minimum of changes to existing code, by gradually adding the missing functionality.
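
For comparison, the check that Text_IO can't express, whether a file's last byte is a newline, is trivial once you have byte-level file access. A hypothetical sketch in Python rather than the Ada under discussion:

```python
import os

def ends_with_newline(path):
    """True if the file's last byte is a newline; False for empty files.
    This is exactly the information Ada's Text_IO won't give you."""
    with open(path, "rb") as f:       # binary mode: no newline translation
        f.seek(0, os.SEEK_END)
        if f.tell() == 0:             # empty file has no final newline
            return False
        f.seek(-1, os.SEEK_END)       # position on the very last byte
        return f.read(1) == b"\n"
```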

To understand the way this patcher works it's helpful to have some idea about the diff format that we're using. There are three data structures that I use to keep track of patch data. The header,

   type Header (From_L, To_L: Natural) is record
      From_Hash: Hash;
      From_File: String(1..From_L);
      To_Hash: Hash;
      To_File: String(1..To_L);
   end record;

which holds the source and the destination file information and corresponds to

diff -uNr a/bitcoin/.gitignore b/bitcoin/.gitignore
--- a/bitcoin/.gitignore false
+++ b/bitcoin/.gitignore 6654c7489c311585d7d3...


A hash is a variant record type that can either hold a specific value or, when there's no hash (indicated by the false label), be explicitly marked empty, which happens when the file is either created or removed, with an empty from and to respectively.

   type Hash_Type is (Empty, Value);
   type Hash(The_Type: Hash_Type := Empty) is record
      case The_Type is
         when Value =>
            Value: String(1..Hash_Length);
         when Empty =>
            null;
      end case;
   end record;

We distinguish between three possible file operations,

   type Patch_Op is (Op_Create, Op_Delete, Op_Patch);

which depend on the presence of hashes. If we only have the input hash, that means the file has been deleted; likewise, with only an output hash, the file is being newly created. Both hashes indicate that the file is being patched.
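
The selection rule fits in a few lines. A sketch in Python (the real patcher derives this from the Hash variant records above):

```python
def patch_op(from_hash, to_hash):
    """Pick the file operation implied by a vpatch header.
    None stands in for the 'false' (empty) hash in the header."""
    if from_hash is None and to_hash is not None:
        return "create"   # no previous file, only an output hash
    if from_hash is not None and to_hash is None:
        return "delete"   # only an input hash, the file goes away
    if from_hash is not None and to_hash is not None:
        return "patch"    # both hashes, the file is modified in place
    raise ValueError("header carries no hashes at all")

print(patch_op(None, "6654c748"))        # create
print(patch_op("6654c748", None))        # delete
print(patch_op("6654c748", "ab12cd34"))  # patch
```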

Each header is followed by one or more "hunks": a line count prelude followed by the line related commands,

   type Line_Numbers is record
      Start: Natural;
      Count: Natural;
   end record;

   type Hunk is record
      From_File_Line_Numbers: Line_Numbers;
      To_File_Line_Numbers: Line_Numbers;
   end record;

A hunk holds the actual change details, that is, a sequence of optional context lines which we use for sanity checking ("the patch claims line foo; does the input file actually have foo at that line?"), followed by some number of additions or deletions, followed by optional context lines. The line commands are not actually stored in memory; instead they are processed as they are encountered. A typical hunk looks like this,

@@ -435,7 +432,7 @@
         BOOST_FOREACH(string strAddr, mapMultiArgs["-addnode"])
-            CAddress addr(strAddr, fAllowDNS);
+            CAddress addr(strAddr);
             addr.nTime = 0; // so it won't relay unless successfully connected
             if (addr.IsValid())

Only the prelude line (the @@ line) is kept in memory. Each record has a corresponding Get procedure which reads the record from the input stream. This way you can say, e.g., Get(A_Hunk) and that'll read the @@ -435,7 +432,7 @@ line from the input stream.
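
Parsing that prelude into the two Line_Numbers pairs is a small exercise. A hypothetical Python equivalent of the prelude handling (per the unified diff format, a count omitted after the comma defaults to 1):

```python
import re

# Matches a unified-diff hunk prelude such as "@@ -435,7 +432,7 @@".
HUNK_RE = re.compile(r"^@@ -(\d+)(?:,(\d+))? \+(\d+)(?:,(\d+))? @@")

def parse_hunk(line):
    """Return ((from_start, from_count), (to_start, to_count))."""
    m = HUNK_RE.match(line)
    if m is None:
        raise ValueError("not a hunk prelude: " + line)
    fs, fc, ts, tc = m.groups()
    return ((int(fs), int(fc or 1)), (int(ts), int(tc or 1)))

print(parse_hunk("@@ -435,7 +432,7 @@"))  # ((435, 7), (432, 7))
```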

The parser is naive in that it looks at the stream one character at a time and dispatches to various handlers, rarely needing to read the whole of something before making a decision. This is a traditional lisp parsing technique, which is also well supported in Ada. The bulk of the work happens inside an unreasonably large Process_Hunks_For_Header procedure. I will eventually attempt to refactor it, but right now it does all the pre and post checks, parses the hunk body and performs the relevant modifications. It relies on the record Gets to parse the input patch. There are two loops in the body: Hunk_Loop, which handles each hunk under the header, and Hunk_Body_Loop, which handles the individual line changes within a hunk. The core of the hunk body loop is a dispatch on the first character in the line,

            exit Hunk_Body_Loop when From_Count = 0 and To_Count = 0;
            Look_Ahead(C, EOL);
            if EOL then
               raise Parse with "blank line in hunk";
            end if;
            case C is
               when '+' => -- line added
               when '-' => -- line deleted
               when ' ' => -- line stays the same
               when others =>
                  raise Parse with "unexpected character";
            end case;

An attentive reader will note that we exit the loop based exclusively on the expected line counts, which is the information communicated in the hunk's @@ prelude.

There are some obvious improvements for future releases, such as the aforementioned newline at end of file issue. I'd also like to port this implementation to the SHA-512 branch, to allow testing in the current workflow. The SHA port in particular will let me test Ada to C interoperability. Going back to the Wednesday schedule, I will address one of these in the next release.

  1. I'm more and more impressed with Ada as a language; unfortunately after extensive use I've run into various issues with the core library, which generally left a poor impression. A lot of the decisions very much leave the impression of []
  2. Vpatch is a streaming tool, and none of the files are read twice. So hashing happens online, and we hash AND attempt to patch in parallel. If the patching attempt fails (because the patch information doesn't match the file's contents), we complete hashing and report either of the errors. Hashing has higher priority, but if the hash is valid we'll report the patching error instead. []
  3. End_Of_File also has odd behavior in that it will report True if there's an end of file OR if there's a newline followed by end of file. This means that if you're using a traditional "while not eof" loop, you're going to lose the last newline. []

vtools C interop, other fixes

Posted on 2018-03-08 02:15 technical, vtools

The original plan to get vpatch released this week fell through, instead there's more bug fixes.

The exercise of trimming GNU diff left a bad taste in my mouth; the end result, while significantly reduced and thus easier to study, is still a significant chunk of "clever" C code, clocking in at 3383 lines. But the exercise was worthwhile1 in that it allowed me to explicitly preserve diff's quirks when it comes to hunk construction, in order to be able to replicate existing vpatches.

The same consideration doesn't apply to a patcher, since a patcher is entirely dumb machinery, a kind of player piano, executing instructions from a pre-recorded tape. As I've been enjoying my brief foray into Ada programming, thrust as it was on me by the republic, I decided to stick with the environment and use it to implement the patcher as well. There are also rational reasons for using Ada for the patcher instead of C. Where a differ can afford to be sloppy in operation (an operator can identify issues by reading the patch), a press absolutely must result in the tree of files claimed by the press chain, or fail explicitly.

The current version of the Ada patcher was modeled on btcbase's internal Lisp implementation and, at 490 lines, can successfully press trb's "stable" branch. Unfortunately in the process of testing the patcher I've discovered another bug in the current keccak differ, which ate up the rest of my allocated time.

The possibility of that bug was hinted at in the recent conversation with diana_coman on the subject of Ada and C interoperability. Vtools' interface to SMG's Keccak has two functions that among other things transfer arrays of characters between diff's C code and Keccak's Ada: C_Hash and C_End. The first takes in the text of the files under consideration, in chunks, and the latter sends back the final hash value, encoded as an array of bits and on the C side represented as a string. Last week's vdiff uses the Interfaces.C.Char_Array type to point to a shared char buffer, and the standard functions Interfaces.C.To_Ada / Interfaces.C.To_C to convert the data between the languages.

Well, in our conversation Diana mentioned that in her experiments with To_Ada it sometimes stopped too soon and failed to copy the entire contents of the buffer. My reaction at the time was essentially "well, works for me"2, except now it doesn't, and on the most recent pass the issue came back with a vengeance, almost entirely failing to transfer the data3. The test environment ostensibly stayed the same, so I'm completely mystified by the behavior. I'll attempt a dive into Ada's code to at least understand what's going on, but meanwhile I rewrote the offending bits using yet another C to Ada copy method.

This is not the approach that diana_coman took in her code, which still uses Interfaces.C.Char_Array for the data type, but instead of using To_C/To_Ada her functions explicitly walk the buffer in a loop, copying each character one by one. I instead listened to the siren's call of Ada's documentation4 and went with Interfaces.C.Pointers, which even provides a helpful example of a roundtrip at the end of the section. The details of the implementation can be seen in the patch, but they follow almost directly the blueprint in the documentation. The approach is similar to what diana_coman does, in that characters are copied in an explicit loop, but instead of Char_Array I'm now using a pointer abstraction, which mimics C's behavior and requires explicit advancement.

The method that I've implemented turned out to have already been condemned by ave1, who apparently wrote what looks like an extensive investigation into dealing with char arrays. All in all, it's back to the drawing board.

The patch also includes a backport of bounds error fix for smg_keccak, documented in extensive detail on diana_coman's blog.

  1. besides, that is, personal educational value []
  2. clearly there's a need for a better stress testing environment on my part []
  3. although when it comes to hashing, "almost" entirely is the proverbial "shit soup" []
  4. standard!!1 []