In this project, Monte Carlo methods are used to simulate X-ray photon propagation through matter. By sending beams into compound materials and measuring the intensities of the beams on the other side, an idea of the composition of the matter can be obtained. To give an idea of the damage caused by these rays and how to limit it, we plot the energy doses deposited by different initial input energies.
This first problem is our foundation. The code is tested in $1D$, providing an intuitive understanding of the Monte Carlo method, and will be expanded in later problems. With this in mind, the code is kept quite generic and easy to reuse.
To start off, a single photon is sent through a material of a given thickness. Later on, the code is expanded to N photons, and the function mainCode() outputs an array of how many photons are still in the material after each step of length dx.
The photon does not always reach the end: at every step there is a possibility of scattering, so the transmission probability is less than one.
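The stepping scheme described above can be sketched as follows. This is only an illustration, not the project's actual mainCode(): the function name main_code, the attenuation coefficient $\mu \approx 0.0916\,\mathrm{cm}^{-1}$ (chosen so that roughly 40% of photons survive 10 cm), and the step size are all assumed values.

```python
import numpy as np

def main_code(n_photons, mu, thickness, dx, rng=None):
    """Count how many photons are still in the material after each dx step.

    At every step, each surviving photon is scattered/absorbed with
    probability mu * dx. Returns an array of surviving-photon counts.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_steps = int(thickness / dx)
    remaining = n_photons
    counts = np.empty(n_steps, dtype=int)
    for i in range(n_steps):
        # draw one uniform number per surviving photon; scatter if below mu*dx
        scattered = rng.random(remaining) < mu * dx
        remaining -= int(scattered.sum())
        counts[i] = remaining
    return counts

counts = main_code(n_photons=10_000, mu=0.0916, thickness=10.0, dx=0.01)
print(counts[-1] / 10_000)  # fraction reaching the detector, roughly 0.40
```

Plotting `counts` against depth reproduces the attenuation curve discussed below.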
From the curve we can determine that approximately 40% of the photons pass. The number printed below the plot is the count of photons that passed through and reached the detector: roughly 2 in 5 photons make it through the 10 cm thick material.
Since the per-step scattering probability is $\mu\,dx$, we need to make sure that it does not exceed 1. Otherwise the plot would tell us that the material absorbs everything at the very beginning.
A high number of photons gives a continuous graph similar to the analytic function. Since we plot the number of remaining photons, a small sample would produce a graph with obvious steps.
A larger step size also means a higher scattering probability per step, resulting in fewer photons reaching the detector.
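The step-size effect can be checked without any random sampling: the probability of passing through $n$ steps is $(1-\mu\,dx)^n$, which approaches the analytic Beer-Lambert transmission $e^{-\mu L}$ as $dx \to 0$ and underestimates it for coarse steps. The values of $\mu$ and $L$ below are assumed for illustration.

```python
import math

mu, L = 0.0916, 10.0  # assumed attenuation coefficient (1/cm) and thickness (cm)

analytic = math.exp(-mu * L)  # exact Beer-Lambert transmission
for dx in (1.0, 0.1, 0.01, 0.001):
    n_steps = round(L / dx)
    p_pass = (1 - mu * dx) ** n_steps  # discrete survival probability
    print(f"dx={dx:>6}: P(pass)={p_pass:.4f}  (analytic {analytic:.4f})")
```

Since $1 - x < e^{-x}$ for $x > 0$, every finite step size gives a transmission below the analytic value, matching the observation above.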
As expected, the attenuation coefficient for bone is higher, due to its higher density.
The output intensity graph shows that a larger portion of the energy is absorbed before reaching the end when bone is placed in the middle, for all energies. It is also noticeable that the difference in intensity seems to be roughly constant across most energy levels.
Contrast is the relative difference of intensities and quantifies the difference in shading seen in an X-ray image. A low contrast means an X-ray of pure tissue would look similar to one with bone in the material. This implies that we want a high contrast, so that the differences in material that shape our image become more obvious. However, because of the impact X-rays have on tissue, we do not want the absorbed dose to be high, as it may damage the tissue.
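As a worked example of this definition, the relative contrast between a pure-tissue beam and a tissue-plus-bone beam can be computed directly from Beer-Lambert attenuation. All numbers here (thicknesses and attenuation coefficients) are hypothetical, chosen only to illustrate the formula:

```python
import math

# Hypothetical setup: 10 cm of tissue vs 5 cm tissue + 5 cm bone,
# with assumed attenuation coefficients (1/cm) at a single energy.
mu_tissue, mu_bone = 0.2, 0.6
I0 = 1.0

I_tissue = I0 * math.exp(-mu_tissue * 10)              # beam through tissue only
I_bone = I0 * math.exp(-(mu_tissue * 5 + mu_bone * 5)) # beam partly through bone

contrast = (I_tissue - I_bone) / I_tissue  # relative intensity difference
print(contrast)
```

Note that the contrast only depends on the difference in attenuation along the bone segment, $1 - e^{-(\mu_b-\mu_t)\,d}$, not on the incoming intensity.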
As mentioned, we do not wish to unnecessarily damage tissue. Even though it is not plotted, the contrast-to-dose print above shows that the highest ratio occurs when photons with energies of $50$ keV are sent in. This energy gives a good image while not causing too much damage to the tissue.
When mapping a 3D object with attenuation coefficients, the Monte Carlo method is arguably not the most efficient approach. In task 1 the goal was to plot the remaining photons at each step; here we only need to know how many photons make it to the end. One could therefore argue that it is a waste of processing power to simulate the number of photons at every step with the Monte Carlo method. Considering this, we chose to solve the problem using the law of total probability: multiplying $1-\mu\,dx$ for every step to get the probability of a single photon making it to the end is a much more efficient way to get a precise result within a reasonable time frame.
We did solve the task with the Monte Carlo method as well, but since efficiency is quite important in a project of this scale, we chose to only show one of the objects from one of the sides. The code for the other two sides should, however, be correct.
The data was loaded using numpy.load, since it provides a three-dimensional numpy.array without any conversion or transposing. Instead of working out how to transpose the loaded array into the right dimensions, three separate functions map the objects along the different axes. This improves readability and makes it easier to add further objects later.
In this alternative method, each function creates a 2D array by iterating through the attenuation probabilities along the x, y and z axes of an object, respectively. A triple for-loop iterates through the three dimensions of the matrix, and the values of the 2D image are calculated with the law of total probability. Since the probability of a photon being absorbed by an element $dx$ is $\mu\,dx$, the chance of it passing through that element is $1-\mu\,dx$. The probability of a photon passing through the entire object is therefore $(1-\mu_0 dx)(1-\mu_1 dx)(1-\mu_2 dx)\cdots(1-\mu_n dx)$. This method provides a usable approximation in a matter of seconds rather than minutes or even hours.
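A sketch of one such projection function, here for beams along the x axis. The function name project_x, the random stand-in attenuation map, and the step length are assumptions for illustration; the project's real functions operate on the loaded object data instead.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = rng.uniform(0.0, 0.5, size=(20, 30, 40))  # stand-in 3D attenuation map (1/cm)
dx = 0.1  # step length (cm)

def project_x(mu, dx):
    """Transmission image for beams along the x axis via total probability.

    Each pixel is the product of (1 - mu_n * dx) over the beam path.
    """
    nx, ny, nz = mu.shape
    image = np.ones((ny, nz))
    for j in range(ny):
        for k in range(nz):
            for i in range(nx):  # multiply the step-wise survival probabilities
                image[j, k] *= 1 - mu[i, j, k] * dx
    return image

image = project_x(mu, dx)
# the triple loop is equivalent to a single vectorized product along axis 0:
assert np.allclose(image, np.prod(1 - mu * dx, axis=0))
```

The vectorized one-liner at the end shows how the loop can be collapsed with numpy, which is where the seconds-rather-than-hours speedup comes from.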
Looking at our contour plots, we can comment on a few aspects. There is a clear difference between $20$ keV and $50$ keV radiation: $50$ keV brings out far better contrast and provides a much more detailed image for both objects. The gains from $50$ to $75$ or $100$ keV, however, are not as apparent. In both objects we notice slightly more contrast, but it does not provide any new information.
We believe that object one is a set of human lungs, and if so, one should be careful about using a high dose without a noticeable contrast improvement. In this case the $50$ keV image is the best choice to maintain safety while still receiving the needed information.
As for object two, which appears to be a coffee mug, safety is not of the essence. The only consequence could be some hydrogen and oxygen gases, and the coffee would still be safe to drink. Nevertheless, a higher energy dose is more expensive and, as was the case for object one, not necessary.
The Monte Carlo method provides a good visual representation of X-rays. We find the method simple, intuitive and, with a great number of simulations, very precise. Its main downside is that it relies heavily on computational capacity and optimization. Our code does not have a very long runtime, and as for parameters we have to trust them as given: $\mu$ is provided, so we find it trustworthy, and all other calculations seem reasonable. All in all, a robust experiment.