Fighting with computers

Computers are not always friendly.

Wednesday, November 08, 2017

AMD proprietary driver experience

A while ago I bought a new 4K display, and since my old graphics card could not handle it, I bought a new one: an AMD RX 460 with DisplayPort and HDMI outputs that could drive the new screen resolution. I picked that card because it apparently had decent Linux support.

What I did not notice at the time was that getting it to work would require moving from Ubuntu 14.04 to 16.04 LTS. I did the upgrade and the card worked, but only in a software rendering mode that was quite slow.

After some more googling I installed the proprietary amdgpu-pro driver. It worked, but not well; among other problems I got:

  1. Numbers in Google Sheets would not show when using Chrome (but did with Firefox).
  2. OpenSCAD would crash when rendering a design.
  3. Processing sketches: any of the examples that use P3D (OpenGL) would crash.
  4. When my kernel was upgraded from 4.4 the graphics driver failed; it would not work again until I installed the HWE stack, and even then I needed to set nomodeset in GRUB.
  5. I experienced random lock-ups, noise in window backgrounds and occasional flicker, mostly when resizing windows.
I reported them to AMD, and while the second one was fixed in amdgpu-pro 17.30, the others kept happening after several driver upgrades.

I ran away from Windows a long time ago in search of a better user experience, and this driver brings back bad memories. It is definitely not the pleasant Linux experience I have grown used to over the last ten years.

Browsing around, I learned that the Linux kernel itself has driver support for my card, so I removed the driver from AMD

amdgpu-pro-uninstall 

and installed the "open" one:


sudo apt-add-repository ppa:paulo-miguel-dias/mesa
sudo apt update
sudo apt install xserver-xorg-video-amdgpu
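
To double-check which driver ends up being used, a couple of standard commands should tell you (glxinfo comes in the mesa-utils package):

lspci -k | grep -A 3 VGA
glxinfo | grep "OpenGL renderer"

The first one shows the kernel driver bound to the card and the second one the OpenGL renderer in use.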

Now I am back in business without any of the problems I mentioned above. I am not sure whether all the pain could have been avoided if I had never attempted to use the proprietary driver in the first place, but I will definitely remember this for future system upgrades.

Saturday, October 14, 2017

Getting work done faster on a CNC machine

I have been playing with CNC machines for a while, and one idea for improving their performance came to mind: what if, as is done with processors, we could add some parallelism to the process to get more work done in the same amount of time? A multi-core processor would then be the analog of a CNC machine with several spindles.

As usual, a quick look around the Internet reveals that a "dual-gantry CNC" is not really a new idea, as a few videos of commercial units can be found on YouTube. Interestingly enough, only a handful of examples show up, which makes me think it is either a bad idea or too complex to work properly in most cases.

My plan here is to have two gantries that move independently and to follow the RISC approach: I will handle the dependencies in software, creating two G-code files, each one feeding one of the gantries, in such a way that the combined plan contains no collisions. I guess another approach could be to put in place some kind of collision detection system that would pause one gantry when a collision was about to happen, but that seems less efficient than creating a manufacturing plan containing no pauses (or collisions :-).


This first video shows how a sample job would be split into two different parts. If the motion in both halves goes left to right, it seems no gantry collision can happen. However, this raises the question of where to make the split. Cutting just in the middle does not guarantee each gantry gets the same amount of work, and if that is not the case one gantry is going to finish sooner than the other, leading to an unbalanced workload and reduced overall efficiency.

So a good balance needs to be obtained by dividing the sheet into two parts with a similar workload (which usually means one side ends up wider than the other), as the next video illustrates.
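
A rough sketch of how such a split point could be picked follows. It assumes the job has already been reduced to a list of toolpath segments with known X extents and cutting lengths; the names and the data are made up for illustration, and segments that straddle the split are simply assigned to the right-hand gantry:

# Hypothetical sketch: choose a split X so both gantries get a similar amount of cutting.
# Each segment is (x_min, x_max, cut_length_mm).
def pick_split(segments):
    total = sum(length for _, _, length in segments)
    best_x, best_diff = None, float("inf")
    # candidate split positions: the X boundaries of the segments
    for candidate in sorted({x for seg in segments for x in seg[:2]}):
        left = sum(length for x_min, x_max, length in segments if x_max <= candidate)
        diff = abs(total - 2 * left)   # |left - right| with right = total - left
        if diff < best_diff:
            best_x, best_diff = candidate, diff
    return best_x

# toy job: four segments spread over a 500 mm wide sheet
segments = [(0, 50, 120.0), (40, 200, 80.0), (210, 400, 150.0), (420, 500, 90.0)]
print(pick_split(segments))   # -> 200, the X where the sheet would be divided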


However, though the left-to-right scanning pattern is very appealing because it guarantees collision-free motion for the gantries, it is quite difficult to get the best performance out of it, so a compromise has to be made. One such compromise is shown in the next video: motion still evolves broadly from left to right, but each gantry is allowed some leeway to move right to left within certain limits, so the overall toolpath ends up shorter than with strictly unidirectional motion while still remaining collision-free.
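
The "still remaining collision-free" part can be verified by sampling both planned toolpaths over time and checking that the gantries never get closer than some clearance along the shared axis. A minimal sketch, with a made-up interface and units:

# Hypothetical check: gantry A is assumed to always work to the left of gantry B.
# path_a and path_b map time in seconds to each gantry's X position in mm.
def collision_free(path_a, path_b, duration, clearance=100.0, dt=0.1):
    steps = int(duration / dt)
    for i in range(steps + 1):
        t = i * dt
        if path_b(t) - path_a(t) < clearance:
            return False   # the gantries would get too close at time t
    return True

# toy example: A sweeps 0-400 mm and B sweeps 600-1000 mm during a 60 s job
print(collision_free(lambda t: 400.0 * t / 60, lambda t: 600.0 + 400.0 * t / 60, 60))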

Please note that the last video is not a real-time simulation of the approach, so do not be surprised if one half of the work appears to finish sooner than the other. The actual simulation shows that both halves require exactly the same running time and would therefore finish at the same moment.

If you have worked with similar systems, do not hesitate to pitch in with a comment. If you have not but want to share your ideas, you are welcome too.



Sunday, October 01, 2017

Random rant of the day

A couple of details made me waste some time until I figured them out. The first one was an issue with POV-Ray 3.7 on Linux, which would preview a black background when I wanted a transparent one. The output was a PNG file and the final result was fine, but I failed to realize the problem was only in the command's temporary on-screen preview and not in the final rendered file. I only noticed this once I ran the same command with the same files on my Mac and the preview showed the checkered pattern of a transparent background.

used with permission

But this does not mean the Mac versions are any better. Second problem: using MeshLab 2016.12 it was impossible to get a snapshot with a transparent background either. It appears to be a known issue too. The same version of MeshLab running on Ubuntu worked like a charm.

I had a third problem I can only blame myself for: it turns out STL files and POV-Ray use different coordinate spaces, so my renders appeared flipped horizontally. Nothing that ImageMagick cannot fix (convert -flop). And yes, the y-axis points up in POV-Ray, so instead of figuring out how to fix that in the scene I just rotated the rendered bitmap so it looks as if the z-axis is up instead.

Tuesday, September 19, 2017

Raspberry Pi is teaching me new tricks

For an upcoming art project I needed to make a few things work on a Raspberry Pi 3. And while I have not yet figured out a neat way of setting the WiFi configuration wirelessly (as they did for ESPlink), I have made some progress on the other configuration-related fronts I needed.

The first thing on the list was to make the RPi3 work with a 3.5" color LCD with a touchscreen. It was simple once I followed the right set of instructions. I am not sure whether the display can be kept on while also using the HDMI output, but guessing not, I did not use the HDMI output during my testing. This LCD has a 480x320 resolution and works as a minimal display for both the text console and X11. You just do not want to browse the web on it.

While the display worked nicely, I wanted it to stay on all the time, but the power-saving settings blanked it after being idle for a while. A bit of googling led me to a nice solution: adding the line xserver-command=X -s 0 dpms to /etc/lightdm/lightdm.conf. One reboot later the display is on all the time. Good.

I also wanted to show a fixed image on the display once the system was up, without window decorations or a mouse pointer. So I created a simple Java program that handles the first part and used this solution to take care of the second one.

Another part of the job is to figure out the IPv4 addresses of the RPi3 once it has successfully connected to a wired or wireless network (so I can connect to it, as I have the ssh service enabled). For that I used a simple Python program I wrote. The code broadcasts (using UDP) the network configuration of the Ethernet and WiFi interfaces of the RPi3. If you have a laptop on the same network you can receive it using the command nc -w 1 -ul 55555
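
A minimal sketch of that idea (not my exact script) could look like the code below; it assumes the usual eth0 and wlan0 interface names and simply re-broadcasts the output of the ip command every few seconds:

# Hypothetical sketch: broadcast the Pi's IPv4 configuration over UDP port 55555.
import socket
import subprocess
import time

BROADCAST = ("255.255.255.255", 55555)   # port matches the nc command above

def interface_report():
    # "ip -4 -o addr" prints one line per IPv4 address and interface
    out = subprocess.check_output(["ip", "-4", "-o", "addr"]).decode()
    lines = [l for l in out.splitlines() if "eth0" in l or "wlan0" in l]
    return "\n".join(lines) + "\n"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)

while True:
    sock.sendto(interface_report().encode(), BROADCAST)
    time.sleep(5)   # announce again every few seconds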



Monday, August 21, 2017

Reading local files on HTML5

I wanted to perform some calculations on STL files. I thought it would be nice to do that within a modern browser, so it would work on any computer without the need to install any binary. But I had never dealt before with reading local files using JavaScript within an HTML page.

I was sure there had to be a way, as sites like gcode.ws or chilipeppr.com allow you to either select a local file to be analyzed or just drag and drop it to be sent to a CNC machine. But my experience with JavaScript is quite limited and I have never felt at ease with that language (though it is mostly the mess of JavaScript + HTML + server extensions + server database that usually makes a programmer's life a living hell).

But given that it had been a long time since I last ventured into anything like the File API, I decided I would learn a new trick. Most of the trouble comes from the asynchronous behavior of the API, which took me a while to understand.

In many languages you open a file, then read from it, and then close it. Using the HTML5 FileReader class you can do it all in one call, even if your file is several megabytes long. However, reading a file takes time, so an asynchronous design is used here to prevent long blocking calls that would make your browser unresponsive. So instead of your code waiting for a potentially long call, a callback function is invoked once the read operation is over.

This mode of operation means that whatever you want to do with the file contents cannot be placed right after the file read call, as the read may not have finished when that line runs. Associated with the FileReader class there are several events that signal different moments of the read operation. The FileReader.onload event handler is the one we are interested in, as it signals that our file has been entirely read.

A second event handler is needed on the file input element so the selected file can be handed to the FileReader to perform the actual reading; for security reasons, we cannot hardcode a filename to read. The resulting code can be found here, and it will allow you to select a text file whose contents are then shown below the button.

Files can be read in different ways: as text, as a data URL or as an ArrayBuffer, depending on your needs and the type of file. For my needs I used the ArrayBuffer option, which made it more or less easy to parse the STL. I based my code on the parseBinary function from Three.js, and in the process I discovered an error that I reported to the project's Git repository. However, the solution I suggested breaks something too.

Sunday, July 30, 2017

Spanish Stick Font for your project

I have a project where I need to draw some text on a plotter, and while I could just use a vectorized TrueType font, that usually comes with the undesired effect of doubling the plotting time and leaving an unfilled (white) interior, as each font stroke is made of two separate outline lines.


A stick font draws each stroke as a single line. Just googling around I found this program, which happens to include a set of font files that were easy to parse (as they are plain text files). This is the font I selected for my purposes:


Originally it did not include the full set of Spanish characters, but that was easy to fix, as you can already see in the picture above. I also realized some of the existing characters could be optimized so drawing time is reduced.

Maybe there is a better way, as I have very limited knowledge of font standards, but this approach makes sense to me. Now I just need to create a simple script to go from text to SVG or G-code.
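
The general shape of such a script would be something like the sketch below; the glyph data here is a made-up placeholder for what would come out of parsing the font file, with each character stored as a list of single-line strokes:

# Hypothetical sketch: render text as single-stroke SVG polylines.
# Each glyph is a list of strokes; each stroke is a list of (x, y) points in font units.
GLYPHS = {
    "I": [[(3, 0), (3, 10)]],                 # one vertical stroke
    "L": [[(0, 10), (0, 0), (6, 0)]],         # two segments drawn as one stroke
    # ... the rest would come from the parsed font file
}

def text_to_svg(text, advance=8, scale=3):
    paths = []
    for i, ch in enumerate(text):
        for stroke in GLYPHS.get(ch, []):
            pts = " ".join("%g,%g" % ((x + i * advance) * scale, (10 - y) * scale)
                           for x, y in stroke)   # flip Y: SVG grows downwards
            paths.append('<polyline points="%s" fill="none" stroke="black"/>' % pts)
    width = len(text) * advance * scale
    height = 10 * scale + scale
    return ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">%s</svg>'
            % (width, height, "".join(paths)))

print(text_to_svg("LI"))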

Update: I just put my code where my mouth was.

Saturday, July 08, 2017

Measuring weight with Arduino

While there are different solutions for measuring weight, it seems the cheapest and neatest way is to use the HX711 amplifier and A/D converter.


Of course, a test weight is needed to calibrate the scale. In my case that job is done by an idler that totals 11.30 grams. My sample load cell is a 100-gram unit, so that is not too bad, but for a larger full-scale range you want a test weight of at least 10% of the full-scale value.

The HX711 does two things: on one hand it is a precision low-noise amplifier for the load cell signal, and on the other hand it performs a 24-bit analog-to-digital conversion (a much higher resolution than the 10 bits of the Arduino's built-in analog inputs).
Once the data is converted to digital form, the Arduino can read it from the HX711 over a two-wire serial interface: the chip streams the 24-bit value on one pin while the other pin provides the clock signal for the transfer. There are several Arduino libraries for that.
You want to wire the HX711 board (I used one I bought off eBay) close to the load cell. The example code is very simple and just gives you a numerical value: the raw digital reading the HX711 provides for the current measurement.

With nothing on it, my load cell reads around 106088 units (after dividing the raw value by 100 to filter out most of the noise). Once I place the test weight on the load cell I get a reading of 104635 units. That means the difference, 1453 units, represents the weight of my test weight. Given that I know its real weight is 11.30 grams, there are 128.58 units per gram.

So, to measure any other weight in grams, what I do is:
grams = ( 106088 - measurement )/ 128.58;

Given my setup, any weight I measure gives me a smaller number than the one I get when the scale is empty. If yours behaves the opposite way, just swap the two terms (the zero value and the measurement), or simply take the magnitude of the value and ignore its sign.



Please note that even though I have divided the measured values by one hundred, there is still some jitter in the readings. For better accuracy, you may want to average several measurements together.
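
Putting the averaging and the conversion together, the logic looks roughly like the sketch below; read_raw() is just a placeholder for whatever reading call your HX711 library provides, and the constants are the values measured above:

# Hypothetical sketch of the averaging plus conversion logic.
EMPTY_READING = 106088.0    # averaged reading with nothing on the load cell
UNITS_PER_GRAM = 128.58     # (106088 - 104635) / 11.30

def averaged_reading(read_raw, samples=10):
    # average several raw readings to smooth out the jitter
    return sum(read_raw() for _ in range(samples)) / float(samples)

def grams(read_raw):
    # loaded readings are smaller than the empty one in this setup
    return (EMPTY_READING - averaged_reading(read_raw)) / UNITS_PER_GRAM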