Last Sunday, I walked into a restaurant thinking about how to embed a watch screen in the back of your hand. Specifically, how to inject yourself with luminescent ink and light up different areas of ink by sending currents through your skin.
I imagined that the areas of ink could be powered from beneath the skin by small beads with little oscillators in them, each amplifying a signal at the proper frequency. That wasn’t what I was focused on at the time. But I immediately realized that if the signals were interpreted so loosely, even the bars that were supposed to be dark might be dimly lit by the noise of all the others.
My goal in this project is to reduce the maximum number of simultaneous signals needed to display any digit. The signals will be the power source, and no logic gates of any kind can be used.
Now the simplest design would be one in which there was a frequency for each bar of each of the four digits, totaling 28 for the entire watch face. Assume the colon has no signal.
 _   _     _   _ 
|_| |_| * |_| |_|
|_| |_| * |_| |_|
I immediately started looking for repeatedly used selections of bars. I could then give each of those bars a detector for the signal that lights up the whole group, alongside the detectors that tell them to light up individually.
In my first experiment, I isolated groups of:
-the two on the right (used in 0, 1, 3, 4, 7, 8, and 9)
-the two on the left (used in 0, 6, and 8)
-the three in the middle (used in 2, 3, 5, 6, 8, and 9)
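Written out as sets (the bar names are just my own shorthand), the digits and the three groups look like this:

# Which bars each digit lights up, so the groups can be written as sets.
DIGITS = {
    0: {"top", "tl", "tr", "bl", "br", "bot"},
    1: {"tr", "br"},
    2: {"top", "tr", "mid", "bl", "bot"},
    3: {"top", "tr", "mid", "br", "bot"},
    4: {"tl", "tr", "mid", "br"},
    5: {"top", "tl", "mid", "br", "bot"},
    6: {"top", "tl", "mid", "bl", "br", "bot"},
    7: {"top", "tr", "br"},
    8: {"top", "tl", "tr", "mid", "bl", "br", "bot"},
    9: {"top", "tl", "tr", "mid", "br", "bot"},
}

GROUPS = [
    frozenset({"tr", "br"}),           # the two on the right
    frozenset({"tl", "bl"}),           # the two on the left
    frozenset({"top", "mid", "bot"}),  # the three in the middle
]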
basis :  _       _   _       _   _   _   _   _ 
      : | |   |  _|  _| |_| |_  |_    | |_| |_|
      : |_|   | |_   _|   |  _| |_|   | |_|  _|

  _   :  _       _   _           _   _   _   _ 
 |_   : |        _|  _  |_  |   |_      |_  |_ 
 |_   : |_      |_   _        | |_|     |_   _ 

  _   :  _       _   _           _   _   _   _ 
  _|  :          _|  _  |_  |    _       _  |_ 
  _|  :  _      |_   _        |  _|      _   _ 

      :  _                           _         
 | |  :           |     |_  |               |  
 | |  :  _      |             |   |            
bottom implies top
thus
      :                              _         
      :           |     |_  |               |  
      :  _      |             |   |            
      :  3   1   2   2   3   3   3   2   3   3
max of 3 signals active at once
2 detectors per bar
10 signal types total
sum of 25 signals to display all digits
The shape on the left of each block is the figure eight with the isolated group removed, and the digits to its right show what remains of each digit once the groups introduced so far have been stripped out.
The numbers then summarize how many signals were necessary to create each digit. They are the sum of the number of group signals used and the number of individual bars that remain.
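For anyone who wants to check my arithmetic, here is a rough way to compute that tally from the DIGITS and GROUPS above. A signal is either a whole group (usable only when every bar in it is lit) or a single leftover bar, and the bottom bar’s signal is assumed to light the top bar too; it doesn’t know about any other shortcut, so it won’t necessarily land on exactly the numbers in the table.

from itertools import combinations

def signals_needed(lit, groups):
    """Fewest signals for one digit: whole groups plus leftover single bars."""
    best = len(lit)  # worst case: one signal per lit bar
    usable = [g for g in groups if g <= lit]
    for count in range(len(usable) + 1):
        for chosen in combinations(usable, count):
            covered = frozenset().union(*chosen) if chosen else frozenset()
            leftover = lit - covered
            if {"top", "bot"} <= leftover:   # bottom implies top
                leftover = leftover - {"top"}
            best = min(best, len(chosen) + len(leftover))
    return best

counts = {digit: signals_needed(bars, GROUPS) for digit, bars in DIGITS.items()}
print(counts, "total:", sum(counts.values()))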
Then I noticed several places where the bottom left and top right / top left and bottom right were being used together, and I tried it again, this time starting out with those groups.
basis :  _       _   _       _   _   _   _   _ 
      : | |   |  _|  _| |_| |_  |_    | |_| |_|
      : |_|   | |_   _|   |  _| |_|   | |_|  _|

  _   :  _       _   _       _   _   _   _   _ 
  _|  :   |   |  _|  _|  _|  _   _    |  _|  _|
 |_   : |_    | |_   _|      _  |_    | |_   _ 

  _   :  _       _   _       _   _   _   _   _ 
 |_   :       |  _   _|  _|  _   _    |  _   _|
  _|  :  _    |  _   _|      _  |_    |  _   _ 

      :  _                           _         
 | |  :       |       |  _|           |       |
 | |  :  _    |       |         |     |        

  _   :  _                           _         
 |_   :                  _|                   |
 |_   :  _                      |              
bottom implies top
thus
      :                              _         
      :                  _|                   |
      :  _                      |              
      :  3   1   2   2   3   2   3   2   3   3
max of 3 signals active at once
2 or 3 detectors per bar
10 signal types total
sum of 24 signals to display all digits
This time I left out the group of the two on the left, because it wasn’t necessary. I still have 10 unique frequencies, because while I added two new groups, there are now two bars that I never need to address individually. But I only managed to reduce the number of signals for the digit 5, and I added an extra detector to all of the vertical bars. My goal is still to reduce the charge in bars that aren’t supposed to be glowing, and increasing the number of frequencies they’re listening for is going to do more harm than good.
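Reusing signals_needed and DIGITS from the sketches above, the second set of groups comes out to the same total of 24, with the digit 5 dropping from 3 signals to 2:

GROUPS_2 = [
    frozenset({"tr", "br"}),           # the two on the right
    frozenset({"top", "mid", "bot"}),  # the three in the middle
    frozenset({"tl", "br"}),           # top left with bottom right
    frozenset({"tr", "bl"}),           # top right with bottom left
]

counts_2 = {digit: signals_needed(bars, GROUPS_2) for digit, bars in DIGITS.items()}
print(counts_2, "total:", sum(counts_2.values()))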
I wanted to start over and get an even better outcome, so I sectioned off part of my text document and started making observations.
=============================
 _       _   _       _   _   _   _   _ 
| |   |  _|  _| |_| |_  |_    | |_| |_|
|_|   | |_   _|   |  _| |_|   | |_|  _|
notes:
top left implies middle whenever not bottom left
bottom implies top always
top implies bottom, except in 7
bottom left implies top right, except when the top left is lit (once, in 6).
top left implies bottom left, except when the bottom right is lit (three times: in 4, 5, and 9).
=============================
I don’t know why it took me so long… but after watching myself make a list of simple observations, I finally realized that I needed to write an algorithm. Probably in Python.
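If I ever do write it, it will probably start out as something like this sketch, which reuses the DIGITS table from earlier and only hunts for the unconditional "A implies B" rules (the "whenever not bottom left" kind would need another pass):

BARS = set().union(*DIGITS.values())

rules = []
for a in sorted(BARS):
    for b in sorted(BARS - {a}):
        exceptions = [d for d, lit in DIGITS.items() if a in lit and b not in lit]
        if len(exceptions) <= 3:
            rules.append((len(exceptions), a, b, exceptions))

for count, a, b, exceptions in sorted(rules):
    print(a, "implies", b, "always" if count == 0 else f"except in {exceptions}")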
I haven’t written that algorithm, because I suddenly became very distracted. Could this be used to compress things?
*Gasp*
It is identifying samples by their common traits. I can probably do that with sound. The first common trait between samples of audio that pops into my head is their closeness to the average of the last n samples. Instead of identifying a 16-bit number with 16 bits, it could be identified by 4 nibbles with unique purposes:
-The first represents the closeness of the sample to the average of the last 256 samples, i.e. the weight of that average in a mean which determines the sample.
-The second represents the closeness of the sample to the average of the last 1024 samples.
-The third represents the closeness of the sample to the average of the last 4096 samples.
-The fourth represents an offset, in case the first 3 can’t come close to representing the new sample.
Obviously, this set of specialized nibbles is just as large as the 16-bit integer they represent. But they don’t need to be saved. They could just be incremented and decremented, maybe even in turn. In my mind this could result in 2 different systems:
-The simple one, which multiplies the 3 averages by their respective nibbles, adds them together, divides by 48, and adds the offset (sketched below).
-One in which the decoder is very smart. It rounds numbers intelligently to make sure that every change has an immediate result, and that almost any sample can suddenly be represented even if it doesn’t follow the pattern. For instance, if the 4096-sample average was very close to zero, giving it a weight between 0 and 15 would barely impact what the sample was calculated to be. In this case the nibble representing the weight wouldn’t want to indicate a weight that was linearly related to the value of the nibble… each of the nibble’s 16 values will instead indicate weights at which the 4096-sample average has a large impact.
A weight of 16 doesn’t mean to multiply this 4096-sample average by 16 when finding the mean. 16 is the MAXIMUM. 16 means “multiply this 4096-sample average by the largest possible coefficient that won’t cause the sample to clip after finding the mean and adding the offset”
The value of the nibble will literally indicate the distance between 1 and this largest possible coefficient. Or perhaps the square root of the distance.
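Roughly what I mean, as a function. The headroom argument (how much this term is still allowed to contribute before the sample clips) is my own simplification, and a nibble actually tops out at 15, so that stands in for the maximum:

def weight_from_nibble(nibble, average, headroom):
    """Map a 4-bit value onto a coefficient for the 4096-sample average.
    15 means the largest coefficient that still fits in the headroom; values
    in between encode the square root of the distance from 1."""
    if abs(average) < 1e-9:
        return 1.0  # the average is nearly zero, so the weight barely matters
    largest = max(1.0, headroom / abs(average))
    return 1.0 + (nibble / 15) ** 2 * (largest - 1.0)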
The systems could also be mixed.
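To convince myself the bookkeeping in the simple system holds together, here is a rough sketch of it as a decoder. The offset scaling and the nibble stream at the bottom are made up for the example, and the weight mapping above could be swapped in where the plain multiplications happen, which is one way the two systems could mix.

from collections import deque

class SimpleDecoder:
    """Three running averages over the last 256 / 1024 / 4096 decoded samples,
    weighted by the first three nibbles, divided by 48, plus the offset."""

    def __init__(self):
        self.histories = [deque([0], maxlen=n) for n in (256, 1024, 4096)]

    def decode(self, w256, w1024, w4096, offset):
        a256, a1024, a4096 = (sum(h) / len(h) for h in self.histories)
        sample = (w256 * a256 + w1024 * a1024 + w4096 * a4096) / 48
        sample += (offset - 8) * 256            # offset nibble as a signed nudge
        sample = int(max(-32768, min(32767, sample)))  # keep it in 16-bit range
        for history in self.histories:
            history.append(sample)
        return sample

decoder = SimpleDecoder()
made_up_nibbles = [(15, 12, 8, 12), (14, 12, 8, 9), (13, 11, 9, 8)]
print([decoder.decode(*nibbles) for nibbles in made_up_nibbles])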