Sunday 8 September 2024

Jelly Load with Artificial Sweetener

I should give a flashing images warning here. If you are affected by such things, give this post a miss.

We are still spending far more time on Jelly Load than any sensible person should, tweaking all the parameters to try to get it as fast and reliable as possible (those two things are normally in opposition with each other. Fast, cheap, reliable. Pick one).

I did look at adding a checksum to JellyLoad, but just couldn't get it to fit into the available space (the loader code has to run from the cassette buffer). It would have needed a bit more restructuring and rewriting, so I have given up on that for the moment.

There are some safety checks at the start: a sequence of patterns for the header that have to match the expected sequence, then the load address needs to be valid.

That also checks you have the correct version of the loader. There need to be three different builds for the three different VIC20 load addresses.

Memory               Load Address
Unexpanded VIC20     $1001
VIC20 + 3K           $0401
VIC20 + 8K or more   $1201

(Yay for standardization. Standards are great, aren't they? There are just so many to choose from...)

If/when I add this to the Penultimate Cartridge, I will need to tie it in to the RAM selection so you always have the right version for the RAM selected (or I need to write a version of the loader which requires 35K RAM to be selected and then fudges the VIC's memory detection so it switches to the appropriate mode).

Colour Bars

This is the test program we have been using recently. Credit to TFW8b for this one; it's quite a neat idea.

It is just a series of PRINTs; it is important that it is not just a loop.

If any of the program bytes are corrupted, skipped or repeated, it should immediately show on the screen.

One extra space caused by a rogue clock pulse will move everything else out of sync.

I don't know what's gone wrong here, but it's certainly not right.

Artificial Jelly Load

The current setup involves pointing a webcam at some LEDs. We have proved that can be done, and that it works.

I was wondering if I could bypass that step (and any potential problems that could introduce) and generate the video file programmatically, to give the best chance of success.

I did look at directly generating video from Python, and that seemed like it was getting complicated, so I decided to take an "easier" approach.

An "Easier" Approach

The plan was to create a folder of images, one for every combination of LEDs, and create an animated GIF or a video file from those.

I was thinking I needed to create images of all the possible patterns, and I could then create a folder of frames that were copies of the appropriate images.

(side note: I am working on Linux, so I will actually be creating symbolic links to the appropriate images rather than copying them, but if it helps, just think of it as copying the files rather than creating shortcuts)

OK, so how many patterns are there? Well, it's 9 bits, so that is 2^9, or 512. Or put another way, each byte of the source file can be one of 2^8 or 256 values, and I need versions with the clock high and low, so 512 in total.

I could sit and colour in the appropriate bits of over 500 images, but I would rather not, thank you very much.

So I need to generate these myself. Again, there are programmatic libraries for generating images, but I want the "easy" way.

I had worked out that all I needed to generate was a 3x3 pixel image. This can then be enlarged in the video player, and TFW8b was already applying a mask to the image to just leave between 0 and 9 clean white circles on a black background (more recently this has become a transparent background).

I created a 3x3 test image and tried saving it in various file formats, to see how large the files were, and how easy it would be to generate a run of 512 of them automatically.

Most image files have headers that would need to be generated. The actual image data is usually compressed, either lossily (like a JPG file) or losslessly (like a PNG file). Some file formats also contain palettes and even thumbnail previews of the larger image. All of this adds to the complication of generating these without getting bogged down in an image generation library.

I also looked into SVG vector images and the potential of creating a font, but it seemed that standard raster images were the best option.

One file format that looked promising was .RAW files. Saving in that format produced a file that was only 9 bytes long, and it turned out that was one byte per pixel, either 00 or 01.

It also generated a six byte RAW.PAL file, presumably a palette file, with a 24 bit RGB value for the two index colours. 00 00 00 and FF FF FF, black and white respectively.

Great, that will do nicely.

I set about writing a simple bit of C code that generated all 512 of these 9 byte files.
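Something along these lines. This is a minimal sketch of the idea rather than the actual program; the filename scheme and the pixel ordering are my assumptions, and the output folder needs to exist first.

#include <stdio.h>

int main(void)
{
    /* 512 patterns: 8 data bits plus 1 clock bit, one byte per pixel */
    for (int n = 0; n < 512; n++) {
        char name[32];
        snprintf(name, sizeof name, "raw/%03x.raw", n); /* hypothetical naming */
        FILE *f = fopen(name, "wb");
        if (!f) { perror(name); return 1; }
        for (int bit = 0; bit < 9; bit++) {
            fputc((n >> bit) & 1, f); /* 0x00 or 0x01 per pixel, assumed LSB = first pixel */
        }
        fclose(f);
    }
    return 0;
}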

512 files, check. Now I can use FFMPEG to render a video based on this folder full of images.

Well, no, it seems it does not support RAW files.

Damn. (should probably have checked that first)

Plan B.

I know it can work with PNG files, so I can use the ImageMagick convert program to convert all the RAW files to PNG.

Except that doesn't support RAW files either.

Damn. (should probably have checked that first)

Plan C.

I don't want to get into generating PNG files directly, so I just need to find a format that I can generate easily, and that can be easily converted to PNG.

OK, what file formats does Convert support?

I looked through various options by doing the reverse process: taking a 3x3 pixel PNG file, converting it to various supported formats that looked promising, and seeing what size files they produced. All the ones I tried were too large / complex.

Then I found .MONO format and that looked interesting.

Not exactly sure what a "Raw bi-level bitmap" is, but it sounds like just what I need.

I converted the .PNG to .MONO and the resulting file was 3 bytes long. (.GRAY also looked interesting at 9 bytes, and would have worked, but I went with .MONO in the end)

I assumed that was a mistake, but when I loaded the .MONO file in GIMP, it was the correct pattern I had started with.

3 bytes? How?

Looking at the file in a hex editor, it appears to be a line-based bitmap.

05 02 05

The first (and last) line is 0000 0101 in binary, and the last three digits match the pattern. The second is 0000 0010; again the last three match.

I wasn't sure of the bit ordering, so I created a test file to see if I had understood correctly.

07 00 01

Yes, I had understood correctly, and it starts with the leftmost pixel. So the line has a 1 if the leftmost pixel is black, 2 if the middle one is, and 4 if the rightmost is.

I modified my C program to generate 512 of these 3 byte .MONO files.
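The modified generator just packs each row of three pixels into one byte. Again, a sketch with my assumed naming and bit ordering (bit 0 as the leftmost pixel of each row, as worked out above), not the actual program:

#include <stdio.h>

int main(void)
{
    /* 512 patterns, one 3 byte .MONO file each: one byte per row of 3 pixels */
    for (int n = 0; n < 512; n++) {
        char name[32];
        snprintf(name, sizeof name, "mono/%03x.mono", n); /* hypothetical naming */
        FILE *f = fopen(name, "wb");
        if (!f) { perror(name); return 1; }
        for (int row = 0; row < 3; row++) {
            fputc((n >> (row * 3)) & 7, f); /* bits 0/1/2 = left/middle/right pixel */
        }
        fclose(f);
    }
    return 0;
}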

Perfect.

FFMPEG didn't accept them, but that's fine, I wasn't expecting it to.

I wrote a little bash script to convert all the .MONO files into .PNG files.
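It only takes a couple of lines; something like this, assuming ImageMagick is installed (raw formats like mono: need the image size supplying with -size):

#!/bin/bash
# convert every 3x3 .mono pattern file into a .png (sketch)
mkdir -p png
for f in mono/*.mono; do
    convert -size 3x3 mono:"$f" "png/$(basename "$f" .mono).png"
done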

OK, looks good so far. I think I will call these Jelly Moulds.

Now I can feed those to FFMPEG and generate a video.

That looks promising, so I set about writing a program to read in a PRG file and generate the appropriate series of frames.

The test program produced 3031 frames from a 1.5K PRG file (two frames per byte to allow the data to settle before toggling the clock).
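The frame generator works something like this. This is a sketch of the idea, not the actual program: the pattern numbering ((clock << 8) | data byte), the file names, and the folder layout are my assumptions, and the frames folder must already exist.

#include <stdio.h>
#include <unistd.h>

static int frame = 0;

/* link the next frame to the pattern image for this clock state and data byte */
static void emit(int clk, int data)
{
    char target[64], link[64];
    snprintf(target, sizeof target, "../png/%03x.png", (clk << 8) | data);
    snprintf(link, sizeof link, "frames/%05d.png", frame++);
    symlink(target, link);
}

int main(int argc, char **argv)
{
    if (argc < 2) { fprintf(stderr, "usage: %s file.prg\n", argv[0]); return 1; }
    FILE *prg = fopen(argv[1], "rb");
    if (!prg) { perror(argv[1]); return 1; }
    int c, clk = 0;
    while ((c = fgetc(prg)) != EOF) {
        emit(clk, c);      /* first frame: new data, clock held, data settles */
        clk ^= 1;
        emit(clk, c);      /* second frame: same data, clock toggled */
    }
    fclose(prg);
    return 0;
}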

I generated various MP4 files from that series of images. There are two parameters to set, both frame rates. The one that is normally used, the "r" parameter, is the frame rate of the output video. 60 frames per second seems optimal, and we have been able to feed these into and out of YouTube successfully.

The other is the "framerate" parameter, and this is the number of source images used per second. So a framerate of 1 would show one of these block patterns every second, and a framerate of 60 would show one pattern per video frame. That is the highest speed we could hope to achieve if the light sensors (and VIC20) were fast enough. In this case, 8 seems to work well; that works out at 125ms per pattern, two per byte, so 4 bytes per second.
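The FFMPEG command ends up something like this (a sketch; the file names are placeholders matching the frame generator above):

ffmpeg -framerate 8 -i frames/%05d.png -r 60 -pix_fmt yuv420p jellyload.mp4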

Mono files => PNG files => Symbolic links based on PRG file => MP4 files => OBS => YouTube => Light Sensors => VIC20 => PRG file loaded

I should really draw a flowchart.

Programmatically generated Jelly Load. It's hypnotic. (N.B. I can neither confirm nor deny the presence of subliminal messages that may cause you to buy my stuff)

Look into the squares, look directly into the squares, the squares, the squares, not around the squares, don't look around the squares, look into the squares, you've loaded.....

Those videos are now off to TFW8b to experiment with. I think the squares version rather than the dots is favourite so far.

STOP PRESS

The latest tests include "Micro Jelly Load", a smaller and less obtrusive version. Hopefully it will not be as distracting, and will disappear into the background like the flickering squares that used to indicate the adverts were about to start on ITV, or the white dots that indicate the end of a reel on old movies.

Side Quest

When I first tried to generate the MP4 file from the PNG images, it failed. 3 pixels wide was not appropriate, as the yuv420p video format requires an even line length.

I initially added a scale by 2 operation to the mono files in the convert step. That produced 6x6 files, which worked and generated videos, but on playback the images were blurred, as the video player performed some smoothing of the scaled images.

That would probably still have looked fine behind the mask, but I thought it was best to scale the images to the size of the box in the final YouTube videos. I changed that to scale by 128, so 128x128 per square, 384x384 overall. That produces the cleaner square videos seen in the rest of the post.
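In ImageMagick terms, something like this (again a sketch). The -scale operator does a plain box scale that keeps the hard pixel edges, unlike -resize, which filters and would blur them:

convert -size 3x3 mono:pattern.mono -scale 384x384 pattern.png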

Just watching those videos makes me think I have seen that before.

Ah, there we go. It seems someone else was experimenting with Jelly Load in the 80s.

They even tried the transparent version.


Advertisements

To load these programs, you will probably need a Penultimate +2 Cartridge

There is also my store with the full range of Minstrel and Mini PET accessories. Mini PET kits are all sold out now, but I do still have some Minstrel 3 kits left, which will last until I run out of Z80s.

I can ship worldwide, use the link at the top of the page to contact me with your location and what you want. Sorry I have to keep saying that. I am working on an alternative.

All the links can be found here:

Patreon

You can support me via Patreon, and get access to advance previews of posts like this and behind the scenes updates. These are often in more detail than I can fit in here, and some of these posts contain bits from several Patreon posts. This also includes access to my Patreon only Discord server for even more regular updates. If you want to see more posts like this, it would be great if you could give me a bit of support on Patreon, or via the PayPal link at the top of each blog post.

Sunday 1 September 2024

Introducing Jelly Load

A couple of months ago, I built a PET serial interface (blog post to follow), and was talking to TFW8b about it. They revealed that a secret plan they had been working on was along similar lines. A scheme that would later be called ..... Jelly Load.

The idea was to inject data into videos on the TFW8b YouTube channel in the form of flashing symbols in the corner of the video. "wouldn't it be cool if you could download the game I was playing whilst you were watching the video, and then play it yourself?"

Initial suggestions were flashing the eyes of the real Rod Hull, or the lights on the pictures of the SD2IEC, divMMC or Kung Flu Flash.

I had to dash TFW8b's dreams by the application of mathematics. It's not certain what frame rate we would be looking at, somewhere between 24 and 60 frames per second, depending on what post-processing YouTube does to the videos - let's leave worrying about that until later.

Let's take the best option, 60 individual valid frames of video. Suppose you could sync up the receiving machine (let's leave worrying about that until later) exactly to sample those 60 frames, and there was no buffering, adverts, mouse pointers moved across the screens, screen savers, screen dimming, Windows updates etc. (let's leave worrying about those until later).

Let's also assume you have a very precise receiver that is able to pinpoint when Rod's eyes go red and turn that into a clean serial bitstream of 0's and 1's (let's leave worrying about that until later).

Assuming all that is perfect, you would get 60 bits per second. With a bit of formatting for RS232 (a start bit and a stop bit bring it to 10 bits per byte), that means you would get 6 bytes per second. Say you wanted to transmit a 32K program; at 6 bytes per second, 32768 bytes would take about 90 minutes.

That was the best case; the worst case would be many times that, depending on how many clear frames could be used before interpolation to a lower frame rate starts to merge adjacent frames. In practice, it is likely to be more like 10 bits per second.

Let's also ignore the fact that the easiest way to include a game in a YouTube video is to put a download link in the description.

The requirements were then:

  • It must be included in the video and survive whatever post processing YouTube applies
  • It must be fast enough to fit in a normal video, 5-10 minutes maybe
  • It must be received by a VIC20 (other machines will follow) using minimal simple hardware that would have been available at the time

Armed with those, let's see what can be done.

Prior Art

When this was being described to me, I said "That's Telesoftware, it was done in the 1980s".

That is partly correct. I think I got the name mixed up. Telesoftware was a service to provide programs for the BBC micro by including them in the teletext data broadcast by the BBC.

They produced a "cheese wedge" expansion for the BBC Micro which included a TV tuner that could extract this data and pass it on to the BBC Micro.

(insert picture of the teletext adapter cheese wedge I have in a box somewhere. It is sort of pointless after the service stopped and even more so after UHF broadcasts stopped, I had thought one day of reusing the case to build a second processor.... - let's leave worrying about that until later)

The relevant system was much simpler: it was in vision, and used a flashing white square in the bottom left hand corner of the screen. You would put a suction cup on that part of the TV and wire up a simple circuit to the userport of your computer. I am pretty sure this was one of Ian McNaught-Davis' programs on the BBC (Making the Most of the Micro, Micro Live etc.) or possibly Fred Harris over on ITV with Me and My Micro. But I can't seem to find any reference to any of those including it.

(There was a Retro Recipes video recently that looked at something done as part of a Channel 4 program using a similar system - https://www.youtube.com/watch?v=MezkfYTN6EQ)

In the absence of the real program, please accept an artist's impression of what that might have looked like.

The white square on a black background (no doubt generated on a BBC micro) provides a high contrast signal that should be easy to detect with a simple photo transistor or light dependent resistor (we can worry about that later...).

Here the broadcast signal was a very solid 50Hz, with 25Hz frames, so 25 bits per second maximum. I don't think we can expect the same frame rates from a processed YouTube video, so we need to look at other options.

We talked through various options, including using a second sensor to detect a second signal that could be used as a clock.

The logical extension of that is to add 7 more sensors and make it 8 data bits and 1 clock bit. That means it can transmit a byte at a time, and with its own clock there is no problem synchronising clocks at both ends as there is with RS232.

With 60 clear frames, that could be as much as 60 bytes per second (which is about the same speed as loading from datasette). However in practice it is likely to be far less than that.

Testing

The steps involved worked out to be as follows:

PRG file 🡆 LEDs 🡆 Camera 🡆 YouTube 🡆 Sensors 🡆 Userport

There are several steps that can be cancelled out to make testing easier.

TFW8b built a simple LED unit and experimented with driving that from the userport, as well as starting to look at some of the sensors.

I was going to need to write the software for both ends, so I skipped the optical stage and wired directly to the VIC20 Userport. For testing, I was using an Arduino to generate the signals.

I started with some simple software that just printed out what was read from the port.

I tried various options for the clock, trying to avoid the need for extra frames to set and clear the clock.

I used a sort of "double data rate" clock, as is used on some DRAM systems (that's the DDR in the name). Here data is clocked on both the rising and falling edges of the clock signal, double the data rate of something clocked just on the rising edge.

The first byte is written with the clock low. Then the second byte is written with the clock high, the third with the clock low. The VIC20 just needs to look for the clock pin changing polarity and read the data at that point.
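On the transmit side, that makes the Arduino code very simple. This is a minimal sketch of the scheme rather than the actual test code; the pin numbers and delays are made up:

// sketch of the Arduino transmit side (assumed pins, not the original code)
const int dataPins[8] = {2, 3, 4, 5, 6, 7, 8, 9};
const int clockPin = 10;
bool clockState = false;

void setup() {
  for (int i = 0; i < 8; i++) pinMode(dataPins[i], OUTPUT);
  pinMode(clockPin, OUTPUT);
}

void sendByte(byte b) {
  for (int i = 0; i < 8; i++) digitalWrite(dataPins[i], (b >> i) & 1);
  delay(5);                      // let the data lines settle
  clockState = !clockState;      // one clock edge per byte, DDR style
  digitalWrite(clockPin, clockState);
  delay(5);
}

void loop() {
  sendByte('A');                 // example: send a stream of test bytes
}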

So I need an 8 bit port for the data, and one input pin for the clock.

The userports on the PET, VIC20, C64 and plus/4 are similar, but frustratingly different.

I need 8 input lines on a single port. The PET, VIC20 and C64 have this, and on the same pins, which is nice, so hopefully a single interface could be designed for all those machines. The plus/4 has pins all over the place in apparently random order, so I'll rule that one out for now.

Next I need a single input pin. Should be easy, right? Well, no. Several of the pins can only serve as interrupt sources, not ideal for initial BASIC testing, and interrupt handling is complicated on some systems, so ideally I would just have a single IO pin. Unfortunately, there are only a few of those on each userport, and none in the same places, so it is likely to be a different board for each system, or maybe jumpers to fit.

There is also the issue that the PET userport doesn't have a power output, so that will also need different connections.

So four similar but mostly incompatible userports.

Thanks Commodore.

Thomodore.

I got on with writing the transmit and receive software, sending from the Arduino to a VIC20.

This awful video shows the process of transmitting a simple 10 PRINT program in real time, with something like a 1 second delay.

That is proof of principle. How far can I push it?

The second video, also awful, also in real time, is the 30K Super Monza Grand Prix II with a considerably shorter delay.

LEDs and Light Sensors

That seemed to be working well; time to add some more steps. I had just been using

PRG file 🡆 Userport

So, over to TFW8b to add the next stage.

PRG file 🡆 LEDs 🡆 Sensors 🡆 Userport

With the idea of keeping this as something that anyone could build from easily available parts, they were looking at some cheap light sensor modules from Amazon.

These seem to be available in two types, one using a photo diode, and one a light dependent resistor.

I took some photos and reverse engineered the schematic.

One moment please.

This is the photodiode version; it uses half of an LM393 dual comparator. One input is a potential divider containing the photodiode, the other an adjustable threshold voltage. The output should be 0V or 5V, depending on whether the photodiode input is higher or lower than the threshold.

The light dependent resistor version is very similar, but is missing the 4th pin to monitor the analogue voltage divider (which we don't need anyway).

Both versions have a capacitor across the light sensor element to smooth the response. That's not ideal in our application, and it might help to remove those.

I did some testing with both versions and found the photodiode one was no good; it could not detect the difference between a black and a white square on one of my monitors. I tried replacing it with a couple of different photodiodes and phototransistors I had, but none of them seemed any good for this application.

The LDR version performed a lot better, and seems to be the best option. TFW8b came to the same conclusion, so LDRs it was.

TFW8b got to work building up a transmitter module using 9 LEDs in a 3D printed mount (the 3D models for all of these are available from TFW8b - https://www.tfw8b.com/introducing-jelly-load).

A similar mount was then made to take 9 of those light sensor modules.

I said 9.

Thank you.

During testing, it became apparent that the response time was slightly different for each of the sensors.

You can see the ends aren't all level, so a slightly different incident angle gives a faster or slower response. There are also the tolerances of the parts involved and the adjustments of the threshold level.

In order to get around that, an extra delay was added before the clock was toggled, to give the data lines time to settle. Good for stability and reliability, but it unfortunately means a lower eventual data rate.

With the delay added, things started working well, so the next step was to add in the video element.

The receiver is currently quite bulky with those boards and wires, so the best result seemed to be standing it on a phone or tablet and playing the video on that (suitably zoomed in).

I wrote a bit of test software which just echoed what each sensor was currently reading, so they could be adjusted and aligned correctly.

TFW8b was able to keep knocking down the delay times to see how fast things could be pushed.

Sending data via YouTube

With real YouTube videos, it is now looking like about 4 bytes per second works reliably.

That means a 3.5K program takes about 15 minutes to load. Not too far from the target we were aiming for, and that is with the full sequence of events:

PRG file 🡆 LEDs 🡆 Camera 🡆 YouTube 🡆 Sensors 🡆 Userport

Some sample videos have now been uploaded, with the Jelly Load square top left.

And one showing the receive program in action.

If anyone wants to try this for themselves, the 3D models and VIC20 programs are available from the links at the bottom of this page from TFW8b:

https://www.tfw8b.com/introducing-jelly-load/

There is also a video guide to building a receiver for yourself, including an 8K expanded VIC20 game sent via Jelly Load for you to "download" and play.

We plan to make an all in one module at some point with 9 sensors and a single adjustment pot, as well as receive software for the PET and C64. I am also working on a direct VIC20-VIC20 version, a sort of "laplink cable" that can be used to transfer software between machines in a much slower and more inconvenient way than just putting it on an SD card.


Advertisements

To load these programs, you will probably need a Penultimate +2 Cartridge

There is also my store with the full range of Minstrel and Mini PET accessories. Mini PET kits are all sold out now, but I do still have some Minstrel 3 kits left, which will last until I run out of Z80s.

I can ship worldwide, use the link at the top of the page to contact me with your location and what you want. Sorry I have to keep saying that. I am working on an alternative.

All the links can be found here:

Patreon

You can support me via Patreon, and get access to advance previews of posts like this and behind the scenes updates. These are often in more detail than I can fit in here, and some of these posts contain bits from several Patreon posts. This also includes access to my Patreon only Discord server for even more regular updates.

Sunday 25 August 2024

Another monitor repair

I don't know what it is about old monitors recently; I seem to be having bad luck. I had another one start to flicker. Usual problem: failing capacitors in the power supply.

This Acer from December 2007 also seemed to be unnecessarily difficult to service.

Everything is behind metal panels.

And then the boards are upside down, but you can't just take out the one with the capacitors on.

There are captive cables soldered at one end that you can't get the other side of to unplug, just like the Phillips one I did previously - http://blog.tynemouthsoftware.co.uk/2024/06/annoying-lcd-monitor-repair.html

I don't think anyone will be surprised to see several caps with their vents bulging at the top.

I don't know if they deliberately place the capacitors right next to the heatsink to ensure they fail, so they can sell you a new monitor in a few years' time*. They don't need to be there; the traces run up to the heatsink for no reason, and they aren't connected to any of the devices attached to it.

* OK, these lasted from 2007 until 2024, but let's not split hairs

Take the two green ones for example. They are both marked 1000µF 10V. One almost touching the heatsink, and one set back slightly.

The one away from the heatsink was sort of OK, 90% of rated capacity.

The other one that was almost touching the heatsink was down to half of its marked capacity. Same age, same brand, same ratings.

The big one at the other side of the heatsink wasn't even bothering anymore.

Time for the replacements. Something you don't often see when people are recapping things is testing the new parts. Are they actually any better? (in some cases, I would argue they are not, but let's not get into that).

You can often get better choices by going for physically larger capacitors, and that usually means going for a higher working voltage than originally fitted.

In the case of the two 1000µF 10V caps, I tried two options. The first is the type I normally use: "low ESR" capacitors, rated for 105°C use.

0.16Ω isn't bad for ESR (the lower the better), and the capacitance is 1062µF, so a touch higher than the rated value of 1000µF, which is good.

Stepping up to a 16V capacitor, that also reads a bit high in capacitance terms, but does even better in terms of ESR. However, those are only 85°C rated, and these things are unnecessarily close to the heatsink as previously discussed, so I went for the 105°C rated parts.

The replacement for the larger one looked like it would be too tall, so I fitted a larger axial one on its side. That also has the benefit of moving it further away from the heatsink, so should extend its life further.

Another monitor resurrected.


Advertisements

I don't sell 20 year old monitors in my store, but I do have the full range of Minstrel and Mini PET accessories. Mini PET kits are all sold out now, but I do still have some Minstrel 3 kits left, which will last until I run out of Z80s.

I can ship worldwide, use the link at the top of the page to contact me with your location and what you want. Sorry I have to keep saying that. I am working on an alternative.

All the links can be found here:

Patreon

You can support me via Patreon, and get access to advance previews of posts like this and behind the scenes updates. These are often in more detail than I can fit in here, and some of these posts contain bits from several Patreon posts. This also includes access to my Patreon only Discord server for even more regular updates.