<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>research | Kyle Godbey - Physicist</title><link>https://kyle.ee/tags/research/</link><atom:link href="https://kyle.ee/tags/research/index.xml" rel="self" type="application/rss+xml"/><description>research</description><generator>Source Themes Academic (https://sourcethemes.com/academic/)</generator><language>en-us</language><copyright>© 2024</copyright><lastBuildDate>Mon, 07 Sep 2020 00:00:00 +0000</lastBuildDate><item><title>Quantum Computing Explorations</title><link>https://kyle.ee/post/quantum_computing/</link><pubDate>Mon, 07 Sep 2020 00:00:00 +0000</pubDate><guid>https://kyle.ee/post/quantum_computing/</guid><description>&lt;p>In the past year, I&amp;rsquo;ve had the pleasure of jumping into a great number of new things. The biggest shift, of course, was starting my new position at Texas A&amp;amp;M in May 2020. The transition was a bit harder than usual, thanks in part to the global pandemic, but the social isolation, paired with moving to a strange new land, allowed me to indulge in things that may not have an immediate payoff. While I hope to put together another of my famed &amp;ldquo;Yearly Updates&amp;rdquo;, this post will highlight some interesting excursions into quantum computing that I stumbled into earlier this year.&lt;/p>
&lt;p>The starting point for most of this was the announcement of the &lt;a href="https://quantum-computing.ibm.com/challenges">IBM Quantum Challenge&lt;/a> in April of this year. I actually missed the announcement, however, and got started halfway through the four-day event, so I had lost ground to make up. The exercises in the challenge started out at the introductory level and then focused on some specific problems and implementations in quantum computing, culminating in a bit of an optimization problem where you were meant to find an optimal circuit representation of a generic unitary matrix. The first three challenges went rather fast (with the third one in particular being a fun Python problem in addition to implementing a quantum algorithm), though I struggled with the final one for a few hours on the last day. I did manage to &amp;ldquo;solve&amp;rdquo; the challenge, though my solution was quite inefficient and far from the optimal solution at the top of the leaderboards. I eventually managed to match the second-highest score after the challenge by reading an informative thread someone posted, but I was much prouder of my own, hacky method. If you&amp;rsquo;d like to try your hand at the challenges, you can find them &lt;a href="https://github.com/qiskit-community/may4_challenge_exercises">on GitHub&lt;/a>. Unfortunately, I wrote my solutions on the IBM notebook server, so I can&amp;rsquo;t readily link them here.&lt;/p>
&lt;p>The next big personal milestone in this exciting field came through another event I got involved with by chance: the &lt;a href="https://sciathon.org/">Lindau Sciathon&lt;/a>. I was scrolling through the project descriptions when I saw one that wanted to build an open science platform to make it easier to get into quantum computing. Once we got started with the full group, we realized that the focus should shift away from the hardware-first approach of the original proposal towards something more general that could be picked up by anyone and extended in the future. We settled on a simple story-based approach to explaining complex topics, in the same vein as &lt;a href="https://www.amazon.com/Quantum-Physics-Babies-Baby-University/dp/1492656224">Quantum Physics for Babies&lt;/a> by Chris Ferrie. We built the story within a Jupyter notebook so that anyone could follow along with implementations of these concepts as well. You can find a link to the notebook on Google Colab &lt;a href="https://colab.research.google.com/drive/1xriXygiSEda0OVwXIAN2KjNmbkVVpXSh">here&lt;/a> and can step through it after making a local copy. The code relies on a library hacked together by Martin Pauly and myself that was originally written for the IBM Quantum Challenge mentioned above, so things have truly come full circle. For a more detailed overview, check out the great piece written by Aleksander Kubica for the Caltech Quantum Frontiers blog &lt;a href="https://quantumfrontiers.com/2020/07/03/what-can-you-do-in-48-hours/">here&lt;/a>, and many thanks to my teammates Shuang, Martin, Aleksander, Hadewijch, Saskia, Michael, Bartłomiej, Ahmed, and Watcharaphol.&lt;/p>
&lt;p>I&amp;rsquo;ve had a blast this year with my two spontaneous dives into quantum circuit design and implementation, and I sincerely hope that my exposure doesn&amp;rsquo;t end there. After getting the basics dumped on me during the IBM weekend challenge, I&amp;rsquo;ve begun looking into domain-specific problems and solutions and have started to form a few potential ideas for how I can bring this technology to the research questions I care about most. Some excellent work is being done in this field, and it&amp;rsquo;s quite honestly hard to keep up with the impressive advances that occur regularly. If anything cool comes out of my playing in the quantum sandbox, expect a feature on it when I compose my future yearly update posts.&lt;/p></description></item><item><title>Physics ex Machina</title><link>https://kyle.ee/post/physics_ex_machina/</link><pubDate>Mon, 16 Sep 2019 00:00:00 +0000</pubDate><guid>https://kyle.ee/post/physics_ex_machina/</guid><description>&lt;p>The following was originally posted on the Lindau Nobel Laureate Meeting blog &lt;a href="https://www.lindau-nobel.org/physics-ex-machina/">here&lt;/a>.&lt;/p>
&lt;p>For the most part, physics is taught with the pen-and-paper approach to solving problems in mind. Be it through a lengthy derivation during a lecture or a particularly tricky exam question, the only problems worth solving appear to be those that are tractable and neat, at least early in the physics curriculum. This generalisation is dissipating with time, and most undergraduate physics courses have begun incorporating more and more data collection and analysis in their hands-on labs, though not much is typically expected from these early forays into data handling beyond the manipulation of a few pre-prepared spreadsheets. Moving to the ‘real world’ of physics research, subdisciplines and research areas across the board find themselves requiring the help of complicated computational techniques to handle the complexity and volume of data encountered in their individual quests for knowledge.&lt;/p>
&lt;p>It is clear now, more than ever, that proficiency with computers, programming, and computer science is extremely important for most physicists, no matter their interests. In this short post I will focus on but one small branch of the larger tree that is computer science and examine how some physicists are picking the fruits of computational research and applying them to their own projects.&lt;/p>
&lt;h2 id="machine-learning">Machine Learning&lt;/h2>
&lt;p>The particular field I’ve chosen to highlight here is machine learning (ML). ML is a natural point of focus, as it is currently one of the hottest fields around, with nary a day going by without a multibillion-dollar tech giant touting ‘Deep Learning’, ‘Artificial Intelligence’, or ‘Big Data’ as what drives its various projects. Buzzwords aside, ML can be broadly defined as a set of techniques that allow computers to perform tasks without being given explicit instructions by the programmer. This generality is extremely powerful for both experimentalists and theorists in situations where there may be no closed-form method to solve a particular problem. First, we will examine an example from experimental nuclear physics in which ML greatly simplifies a time-consuming and arduous task.&lt;/p>
&lt;h2 id="the-experimental-case">The Experimental Case&lt;/h2>
&lt;p>Traditionally, many experiments in my particular field (low-energy nuclear physics) have relied on rather simple, brute-force techniques when it comes to data analysis. Data will be collected over the course of a weeks-long beamtime and then examined by hand (with some preprocessing) to determine the outcome. For most of the field’s history this has been adequate and has allowed for some truly incredible discoveries. However, accelerators slated to come online in the next few years (e.g. &lt;a href="https://frib.msu.edu/">FRIB&lt;/a>) will produce much more data than previous experiments, without a proportionate number of graduate students to analyse it. Indeed, this problem of ‘too much data’ has been encountered by many other fields and experimental endeavours, such as the Large Hadron Collider (LHC) at CERN, which uses ML both in online measurements, to decide whether or not to record data, and in post-collection analysis, to make understanding the data feasible.&lt;/p>
&lt;p>As one example, Michelle Kuchera from Davidson College has pulled from other fields of physics and recent research on deep convolutional neural networks (CNNs) to &lt;a href="https://arxiv.org/abs/1810.10350">take nuclear reaction data from the Active Target – Time Projection Chamber (AT-TPC) at Michigan State University and classify it into specific categories of events&lt;/a>. This is an excellent example of what is called a classification problem, using state-of-the-art computational techniques to remove a bottleneck from her research. Essentially, these CNNs are trained on images with assigned labels and then tested on new images they have not seen before. This is the same technique that image recognition software uses to determine whether there is a face (and whose face it is) in a photo or video. It turns out that much of what is required in the data analysis process is simply recognising certain classes of events from an ‘image’, or experimental output in this case, so the human element can be bypassed, allowing for the processing of vastly enlarged data sets. Next, we shall consider another tricky issue by way of a toy model in my home field of nuclear theory.&lt;/p>
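&lt;p>To make the classification setup concrete, the skeleton of such a network can be written in a few lines with TensorFlow/Keras. The input size (single-channel 128x128 event ‘images’) and the three event categories here are illustrative assumptions on my part, not the actual AT-TPC analysis configuration.&lt;/p>

```python
import tensorflow as tf

# A minimal convolutional classifier: stacked convolution/pooling layers
# extract features from an event "image", and a final softmax layer assigns
# a probability to each candidate event category.
num_classes = 3
model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Training would pair labelled event images with their known categories,
# e.g. model.fit(train_images, train_labels, validation_split=0.2)
```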
&lt;h2 id="speaking-theoretically">Speaking Theoretically&lt;/h2>
&lt;p>Rather than examining an actual use case, I’ll present a toy model that uses a simple artificial neural network (ANN) and gets at what makes ML so powerful in some instances. For this example, consider the binding energy per nucleon of various nuclei. This quantity is often used as a test of nuclear mass models, and there is ample experimental data to compare with. In contrast to a classic model like the liquid drop model (LDM), which assumes a particular form of neutron and proton dependence, I’ve constructed a very simple ANN which takes only proton (Z) and neutron (N) numbers as inputs and outputs a binding energy. The network comes to learn what binding energies should correspond to a given N and Z by being trained on experimental data (the AME2016 mass evaluation was used in this case). However, to avoid over-fitting, one breaks off a chunk of the full data set, called the ‘test set’, which is set aside and only used at the end to check how well the network performs. This use of an ANN is an example of a regression ML problem, which amounts to a fancy form of curve fitting. In the following figure I plot the output of the ANN for N and Z values that it was not trained on (the test set) on top of the full experimental data set:&lt;/p>
&lt;p>&lt;img src="https://kyle.ee/img/data.jpg" alt="Data">&lt;/p>
&lt;p>The preceding network was built in a few lines of Python using Google’s TensorFlow library, which provides an extremely powerful yet accessible framework for non-experts to experiment with ML.&lt;/p>
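&lt;p>A minimal sketch of such a network in TensorFlow/Keras might look like the following. The layer sizes, training settings, and the synthetic stand-in data are my own assumptions for illustration, not the configuration used to produce the figure above; a real run would load the AME2016 values instead.&lt;/p>

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for the mass data: (Z, N) pairs and a crude
# liquid-drop-like binding energy per nucleon in MeV, just so the
# sketch has something to fit.
rng = np.random.default_rng(0)
Z = rng.integers(8, 100, size=500).astype("float32")
N = rng.integers(8, 150, size=500).astype("float32")
A = Z + N
be = 15.8 - 18.3 * A ** (-1.0 / 3.0) - 0.71 * Z**2 / A ** (4.0 / 3.0)

X = np.stack([Z, N], axis=1)
y = be.astype("float32")

# Hold out a test set so performance is judged on nuclei the network
# never saw during training, guarding against over-fitting.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# A small fully connected network: two inputs (Z, N), one output (BE/A).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=50, verbose=0)

# Evaluate only on the held-out test nuclei.
test_mse = model.evaluate(X_test, y_test, verbose=0)
```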
&lt;p>It should be reiterated that the structure of the ANN was completely arbitrary and general: no assumptions were made regarding what the output ‘should’ be, and yet the test predictions fall nicely within the experimental data bounds. A hurdle that physicists must overcome when attempting to apply ML techniques to their own research is to embrace this arbitrariness, as we scientists will often try to include more information than is needed when designing the input and output data for an ANN. Including too much ‘physics’ can sometimes bias the network in a negative way and harm generality.&lt;/p>
&lt;h2 id="final-thoughts">Final Thoughts&lt;/h2>
&lt;p>Applying supervised learning techniques to various problems in physics makes it clear that ML offers a powerful set of tools for performing ill-defined or intractable tasks in both experimental and theoretical settings. While some of our colleagues have been using these specific tools for quite a while, many of us learn of the existence of ML far too late in our careers. This is true of many other basic and advanced computational fields as well, which is a sign that physics education may be missing a critical learning opportunity when designing curricula. Ideally, physics departments would work closely with computer science departments to ensure their students were well versed in the technical skills and computational theory that may be useful, though even a basic programming overview in an introductory lab would be preferable to pretending that computational skills aren’t going to be necessary down the road. Indeed, close collaboration between computer science and physics has already led to some previously impossible feats; imagine what could be achieved if these skills were implanted at the beginning of our journeys through physics.&lt;/p></description></item></channel></rss>