Fighting with computers

Computers are not always friendly.

Wednesday, December 06, 2017

Helping video projectors to behave

A friend and colleague of mine, an artist, was planning an art exhibit and asked me a simple question: how can I best [automatically] switch off some video projectors I am using?

After some experience, I have come to realize that interactive art projects carry the additional complexity of day-to-day starting and stopping. Most venues have staff who can take care of operating an electric switch or a remote control, but anything more complex than that and you are in trouble, and the success of your project may be jeopardized by an improper setup. So it is in the best interest of the artist to streamline the process as much as possible.

It is really not a problem to ask the staff to switch on an art installation using a remote control, but if you want your piece to shut itself down, some convincing may be needed. What is a really bad idea, and unfortunately I am witnessing it with all kinds of equipment on campus, is to just remove power from the device you want to switch off. The reason is that many devices, from computers to video projectors to AC units, require a specific shutdown sequence to make sure no damage is done.

Most video projectors will warn you against shutting them down by removing power. If you choose to ignore the warning, you may quickly get in trouble (a shortened lamp lifespan, and these lamps are expensive). So what do you do to shut them down automatically? My proposal is to use an Arduino to transmit the same infrared signal the remote sends for powering the projector on and off. You can program the Arduino to power the video projector on and off whenever you see fit, making human intervention unnecessary once it is installed.

But if you want an Arduino to transmit the "power on/off" code, the first thing you need to do is figure out what code the remote is sending. To do that, an IR receiver is needed. The one I used is the TSOP4838, which works well with 38 kHz IR remotes.


I have used IRLib2 with that TSOP4838 receiver, just plugged into an Arduino UNO board, and it worked flawlessly, as the picture above shows (I just used the dump example that comes with the library, with pin 2 as data input). Like many other remotes, mine uses the NEC format, and the power button spits out the code 0x8C73817E. Half of the work is done now.
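As an aside, a classic NEC frame carries an address byte, its bitwise complement, a command byte, and its complement, so a dumped code can be sanity-checked. A minimal sketch (it assumes the 32-bit value is dumped most-significant byte first, which can vary between libraries, and extended NEC replaces the address complement with a second address byte):

```python
# Sanity-check an NEC remote code: the frame carries address, ~address,
# command, ~command, so opposite byte pairs must XOR to 0xFF.
code = 0x8C73817E  # power button code dumped from the projector remote

addr     = (code >> 24) & 0xFF  # 0x8C
addr_inv = (code >> 16) & 0xFF  # 0x73
cmd      = (code >> 8)  & 0xFF  # 0x81
cmd_inv  =  code        & 0xFF  # 0x7E

assert addr ^ addr_inv == 0xFF, "address/complement mismatch"
assert cmd ^ cmd_inv == 0xFF, "command/complement mismatch"
print("looks like a well-formed classic NEC frame")
```

Codes that fail this check are not necessarily wrong, but a code that passes it is a good hint the dump was clean.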

Once you know the code you want to send, you can use the same library for the sending side. By default, digital pin 3 is used for output. Depending on the distance you want to cover, you may or may not get away with powering the IR LED straight from an Arduino pin. Most of the time you want a decent range, and to achieve that you will use a transistor to boost the current through the LED to 50 or 100 mA (depending on the specs of your LED). Some people do not even use a current-limiting resistor in series with the IR LED, claiming the current pulses are so short and infrequent that the LED will not be damaged, and peak emitted power is higher this way. I just used a BD137 bipolar transistor and a 100-ohm resistor in series with my IR LED. Have a look at the rawSend example from the library to learn how to transmit an IR code.
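For reference, rawSend-style APIs take an array of mark/space durations in microseconds. The NEC pulse train for a 32-bit code can be sketched like this (nominal NEC timings; real libraries round these slightly, and which bit and byte order matches your dumped value depends on the library, so treat this as an illustration):

```python
def nec_pulse_train(code):
    """Build the mark/space durations (microseconds) of an NEC frame.

    Returns a list starting with a mark and alternating mark/space:
    9 ms leader mark, 4.5 ms space, 32 data bits (sent LSB first,
    each a 560 us mark followed by a 1690 us space for a 1 or a
    560 us space for a 0), then a 560 us stop mark.
    """
    train = [9000, 4500]                      # leader mark and space
    for i in range(32):
        bit = (code >> i) & 1                 # least-significant bit first
        train += [560, 1690 if bit else 560]  # mark, then long/short space
    train.append(560)                         # stop bit mark
    return train

durations = nec_pulse_train(0x8C73817E)
print(len(durations), sum(durations) / 1000.0)  # 67 entries, ~68 ms frame
```

Since half the bits of a well-formed NEC code are ones, every frame ends up close to the same ~68 ms length regardless of the code.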

Most of our video projection units require pressing the power button twice to power them down. After some experimentation, I settled on a 3-second pause between the two transmissions (longer or shorter pauses would fail to power down the projector).

A detailed explanation of IR communications with Arduino can be found in this excellent video by Andreas Spiess.

Wednesday, November 08, 2017

AMD proprietary driver experience

A while ago I bought a new 4K display and my old graphics card could not handle it anymore, so I bought a new card, an AMD RX 460 with DisplayPort and HDMI outputs, which would be OK for the new screen resolution. It also apparently had decent Linux support.

I did not notice at the time that, in order to get it working, I would need to move from Ubuntu 14.04 to 16.04 LTS. I did the upgrade and it worked, but in a software rendering mode that was quite slow.

Some more googling later, I installed the proprietary amdgpu-pro driver, which worked, but not well. Among other problems I got:

  1. Numbers on Google spreadsheets would not show when using Chrome (but did with Firefox).
  2. OpenSCAD would crash when rendering a design.
  3. Processing programs (any of the examples that use P3D/OpenGL) would crash.
  4. When my kernel was upgraded from 4.4, the graphics driver failed; trying to upgrade it did not work until I installed HWE, and even then I needed to set nomodeset in GRUB.
  5. I experienced random lock-ups, background noise in windows, and occasional flicker, mostly when resizing.
I reported these to AMD, and while the second one was fixed in amdgpu-pro 17.30, the others kept happening after several upgrades.

I ran away from Windows a long time ago to get a better user experience, and this driver brings back bad memories from the past. Definitely not the typical pleasant Linux experience of the last ten years.

So, browsing around, I learned that the Linux kernel has driver support for my card, so I removed the driver from AMD:

amdgpu-pro-uninstall 

and installed the "open" one:


sudo apt-add-repository ppa:paulo-miguel-dias/mesa
sudo apt update
sudo apt install xserver-xorg-video-amdgpu

Now I am back in business without any of the problems I mentioned above with the proprietary driver. I am not sure whether all the pain could have been prevented if I had never attempted to use the proprietary driver in the first place. I will definitely remember that for future system upgrades.

Update Nov 28th, 2017

After today's system upgrade my graphics are broken again (software rendering is so slow as to cause real pain). Booting a 4.4 kernel at least keeps the graphics acceleration running, but I am experiencing a level of disgust that is dangerous for the health of my graphics card.

Oops: why does my /etc/default/grub have nomodeset among the kernel loading parameters? Getting rid of it gets everything working once again.
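For the record, the parameter lives in /etc/default/grub; a line like this was the culprit (the other values here are illustrative, not my exact file):

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
```

After deleting nomodeset from that line, running sudo update-grub regenerates the actual boot configuration so the next boot picks up the change.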

Saturday, October 14, 2017

Getting work done faster on a CNC machine

I have been playing with CNC machines for a while, and one idea for improving their performance came to my mind: what if, as with processors, we could add some parallelism to the process to get more work done in the same amount of time? A multi-core processor would then be the analog of a CNC machine with several spindles.

As usual, a quick look around the Internet reveals that a "dual-gantry CNC" is not really a new idea, as a few videos of commercial units can be found on YouTube. Interestingly enough, only a few cases are shown, which makes me think it is either a bad idea or too complex to work properly in most cases.

My plan here is to have two gantries that move independently and to follow the RISC approach: I will handle the dependencies in software, creating two g-code files, each one feeding one of the gantries, in such a way that the combined plan for both gantries contains no collisions. I guess another approach could be to put in place some kind of collision detection system that would pause one gantry when a collision was about to happen, but that seems less efficient than creating a manufacturing plan containing no pauses (or collisions :-).


This first video shows how a sample job would be split into two different parts. If the motion is done left to right in both cases, it seems no gantry collision can happen. However, this raises the question of where to perform the cut. Cutting just in the middle does not guarantee that each gantry will have to work the same amount of time, and if that does not happen, one gantry is going to finish sooner than the other, leading to an unbalanced workload and reduced overall efficiency.

So a good workload balance needs to be obtained by dividing the sheet into two parts with a similar workload (which usually means one side is wider than the other), as the next video illustrates.
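The balance point itself can be found in software. A toy sketch of the idea, assuming each toolpath operation has been reduced to an x position and an estimated time cost (a real planner would also account for rapids, tool changes, and the gantry clearance zone):

```python
def balance_cut(operations):
    """Pick the x coordinate that splits the work into two similar halves.

    operations: list of (x, cost) pairs, one per toolpath operation,
    where cost is the estimated machining time of that operation.
    Returns the x at which cumulative cost first reaches half the total.
    """
    operations = sorted(operations)           # sweep left to right
    half = sum(cost for _, cost in operations) / 2.0
    accumulated = 0.0
    for x, cost in operations:
        accumulated += cost
        if accumulated >= half:
            return x
    return operations[-1][0]

# Heavy work near x=10 pulls the cut left of the sheet center:
ops = [(0, 5), (1, 5), (2, 5), (3, 5), (10, 20)]
print(balance_cut(ops))  # cut at x=3, giving 20 time units to each gantry
```

The narrow-but-dense side and the wide-but-sparse side then finish at roughly the same time, which is exactly the unbalanced-width split the videos show.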


However, though the left-to-right scanning pattern seems very appealing in terms of guaranteeing collision-free motion for the gantries, it is quite difficult to obtain the best performance out of it, so a compromise has to be made, like the one shown in the next video: motion still evolves from left to right, but some leeway is allowed so a gantry can move right to left within certain limits. The overall toolpath remains shorter than with purely unidirectional motion, yet it is still collision-free.

Please note that the last video is not a real-time simulation of the approach, so do not be surprised if one half of the work finishes sooner than the other. The actual simulation shows that both halves would require exactly the same running time and would therefore finish at once.

If you've worked with similar systems do not hesitate to pitch in with your comment. If you haven't but want to share your ideas, you are welcome too.



Sunday, October 01, 2017

Random rant of the day

A couple of details made me waste some time till I figured them out. The first one was an issue with POV-Ray 3.7 on Linux, which would preview a black background when I wanted a transparent one. The output was a PNG file and the final result was OK, but I failed to notice that the error was only in the command's temporary output to the display, not in the final rendered file. I spotted the problem once I ran the same command with the same files on my Mac, and the preview showed the checkered pattern of a transparent background.


But this does not mean the Mac versions are any better. The second problem: with Meshlab version 2016.12, it was impossible to get a snapshot with a transparent background either. It appears to be a known issue too. The same version of Meshlab running on Ubuntu worked like a charm.

I had a third problem I can only blame myself for: it turns out STL files and POV-Ray use different coordinate spaces, so my renders appeared flipped horizontally. Nothing that ImageMagick cannot fix (convert -flop). And yes, the y-axis is up in POV-Ray, so instead of figuring out how to fix that there, I just rotated the rendered bitmap so it looks as if the z-axis is up instead.

Tuesday, September 19, 2017

Raspberry Pi is teaching me new tricks

For an upcoming art project, I needed to make a few things work on a Raspberry Pi 3. And while I have not yet figured out a neat way of setting the wifi configuration wirelessly (as they did for ESPlink), I have made some progress on other configuration-related fronts.

First thing on the list was to make the RPi3 work with a 3.5" color LCD with a touchscreen. It was simple once I followed the right set of instructions. I am not sure whether the display can be kept on while using the HDMI output, but guessing not, I dropped the HDMI output for all my testing. This LCD is 480x320 and can work as a minimal display for both the text console and X11. You just do not want to browse the web with it.

While the display worked nicely, I wanted it to stay on all the time, but the power-saving settings disabled it after it had been idle for a while. A bit of googling led me to a nice solution: add the line xserver-command=X -s 0 dpms to /etc/lightdm/lightdm.conf. One reboot later, the display is on all the time. Good.
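In case it helps, the relevant fragment of /etc/lightdm/lightdm.conf ends up looking like this (the section header is [Seat:*] on recent lightdm versions and [SeatDefaults] on older ones, so check which one your file already uses):

```
[Seat:*]
xserver-command=X -s 0 dpms
```

The -s 0 disables the X screen saver and dpms turns off display power management, which together keep the panel lit.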

I also wanted to show a fixed image on the display once the system was up, without window decorations or a mouse pointer. So I created a simple Java program that handles the first part, and used this solution for taking care of the second one.

Another part of the job was to figure out the IPv4 addresses of the RPi3 once it had successfully connected to a wired or wireless network (so I can connect to it, as I have the ssh service open). For that I used a simple Python program I wrote. The code broadcasts (using UDP) the network configuration of the Ethernet and wifi interfaces of the RPi3. If you have a laptop sharing the same network, you can receive it using the command nc -w 1 -ul 55555
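The broadcasting side boils down to something like this sketch (gathering the addresses via hostname -I is an assumption of mine, Linux-specific; the original program may enumerate the interfaces differently):

```python
import socket
import subprocess

def current_addresses():
    # All IPv4 addresses of this host; `hostname -I` is Linux-specific
    out = subprocess.run(["hostname", "-I"], capture_output=True, text=True)
    return out.stdout.strip()

def broadcast(message, address="255.255.255.255", port=55555):
    """Send one UDP datagram announcing `message` to the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(message.encode(), (address, port))

# On the Pi, something like: broadcast("rpi3 " + current_addresses())
```

Anything listening on UDP port 55555 on the same subnet (such as the nc command above) will print the announcement.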



Monday, August 21, 2017

Reading local files on HTML5

I wanted to perform some calculations over STL files. I thought it would be nice to be able to do that within a modern browser, so it would work on any computer without the need to install any binary. But I had never dealt before with reading local files using Javascript within an HTML page.

I was sure there had to be a way, as sites like gcode.ws or chilipeppr.com allow you either to select a local file to be analyzed or to just drag and drop it to be sent to a CNC machine. But my experience with Javascript is quite limited, and I have never felt at ease with that language (though it is mostly the mess of Javascript + HTML + server extensions + server database that usually makes a programmer's life a living hell).

But given that it had been a long time since I last ventured into the intricacies of the File API, I decided I would learn a new trick. Most of the trouble comes from the asynchronous behavior of the system, which took me a while to understand.

In many languages, you open a file, then read from it, then close it. Using the FileReader class of HTML5, you can do it all at once, even if your file is several megabytes long. However, reading a file takes time, so an asynchronous design is used here to prevent long blocking calls that would make your browser unresponsive. So instead of your code waiting for a potentially long call, a certain callback function is invoked once the read operation is over.

This mode of operation means that whatever you want to do with the file contents cannot be placed just after the file read call, as the read may not have finished at all when that line is processed. Associated with the FileReader class there are several events that signal different moments of the read operation. The FileReader.onload event handler is the one we are interested in, as it signals that our file has been entirely read.

A second event handler is needed so we can pass the file the user selected to the FileReader to perform the actual reading. For security reasons, we cannot hardcode a filename to read. The code can be found here, and it will allow you to select a text file that will later be shown below the button.
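A minimal reconstruction of the two-handler pattern looks like this (element ids and details are my own invention for illustration, not necessarily those of my original page, and it needs a browser to run):

```html
<input type="file" id="picker">
<pre id="output"></pre>
<script>
// Second handler: fires when the user picks a file and hands it to FileReader
document.getElementById("picker").addEventListener("change", function (event) {
  var file = event.target.files[0];   // the File object the user selected
  if (!file) return;
  var reader = new FileReader();
  // First handler: fires only once the whole file has been read
  reader.onload = function () {
    document.getElementById("output").textContent = reader.result;
  };
  reader.readAsText(file);            // asynchronous: returns immediately
});
</script>
```

Note that readAsText returns before the read completes; everything that depends on the contents must live inside the onload handler.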

Files can be read in different ways (as text, as a DataURL, or as a byte array) depending on your needs and the type of file. For my needs, I used the byte array option, which made it more or less easy to parse the STL. I used the parseBinary function from Three.js as a base, and in the process I discovered an error that I reported to the project's git. However, the solution I suggested breaks something else too.
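As an illustration of why the byte array route works well: a binary STL is just fixed-size records, so a parser is short. A minimal sketch (in Python rather than Javascript, and unlike the Three.js code it ignores the ASCII STL variant entirely):

```python
import struct

def parse_binary_stl(data):
    """Parse a binary STL: an 80-byte header, a little-endian uint32
    triangle count, then 50-byte records, each holding a normal and
    three vertices (12 floats) plus a 2-byte attribute word."""
    (count,) = struct.unpack_from("<I", data, 80)
    triangles = []
    offset = 84
    for _ in range(count):
        v = struct.unpack_from("<12fH", data, offset)
        normal = v[0:3]
        vertices = [v[3:6], v[6:9], v[9:12]]
        triangles.append((normal, vertices))
        offset += 50
    return triangles

# Build a one-triangle STL in memory and parse it back
record = struct.pack("<12fH", 0, 0, 1,  0, 0, 0,  1, 0, 0,  0, 1, 0,  0)
sample = b"\x00" * 80 + struct.pack("<I", 1) + record
print(parse_binary_stl(sample)[0][1][1])  # second vertex: (1.0, 0.0, 0.0)
```

The same fixed-offset arithmetic maps directly onto a Javascript DataView over the byte array the FileReader returns.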

Sunday, July 30, 2017

Spanish Stick Font for your project

I have a project where I need to draw some text on a plotter, and while I could just use a vectorized TrueType font, that usually comes with the undesired effect of doubling the printing time, plus leaving a white background, as the font strokes are made of two different lines.


A stick font consists of single lines. Just googling around, I found this program, which happens to include a set of font files that were easy to parse (as they are text files). This is the font I selected for my purposes:


Originally it did not include the full set of Spanish characters, but that was easy to fix, as you can already see them in the picture above. I also realized some of the existing characters could be optimized so drawing time could be reduced.

Maybe there is a better way, as I have very limited knowledge of font standards, but this approach makes sense to me. Now I just need to create a simple script to go from text to SVG or G-code.
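The text-to-SVG part can be sketched in a few lines; the glyph coordinates below are invented placeholders, not the actual font data:

```python
# Hypothetical single-stroke glyphs: each glyph is a list of polylines,
# each polyline a list of (x, y) points on a 0-10 design grid.
GLYPHS = {
    "I": [[(3, 0), (3, 10)]],
    "L": [[(0, 10), (0, 0), (6, 0)]],
}

def text_to_svg(text, advance=8):
    """Emit one SVG polyline per pen stroke, shifting each glyph right."""
    strokes = []
    for i, ch in enumerate(text):
        for line in GLYPHS.get(ch, []):
            points = " ".join(f"{x + i * advance},{y}" for x, y in line)
            strokes.append(f'<polyline points="{points}" fill="none" stroke="black"/>')
    return ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 12">'
            + "".join(strokes) + "</svg>")

print(text_to_svg("LI"))
```

Since each polyline is a single pen stroke, the plotter never retraces an outline, which is the whole point of a stick font. G-code output would follow the same loop, emitting a pen-up move to each stroke's first point and pen-down moves for the rest.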

Update: I just put my code where my mouth was.