If you already have the serial code and have already decided that the thesis will focus on accelerating this specific code with OpenACC, then the "hottest" topics could be "which adaptations of the original code/algorithm help OpenACC improve the application's performance" and "how well do your proposals work on different GPUs (if you have more than one available), i.e., how general are they".
Of course, a comparison with the corresponding CUDA implementation can also be interesting.
The following link provides some step-by-step instructions for what you are trying to accomplish. However, I would recommend that you consider OpenMPI/MPICH + OpenCL in order to avoid vendor lock-in for either your GPU or your compiler. Things could have changed since I last looked at OpenACC, but I believe that the PGI acceleration directives only work for NVidia GPUs for now. The GPUs from AMD provide much higher double-precision performance and work well with OpenCL.
I attended a PGI workshop on CUDA and OpenACC. What I took from that workshop is that OpenACC is hard to tune for really good performance. If you want to get close to good performance, you need to know as much about the GPU as you do when using CUDA or OpenCL. There are special directives for copying data between CPU and GPU memory.
The only advantage OpenACC brings is that you have the same source code for the CPU and the GPU; even with a compiler that does not support OpenACC, the code still builds (you just get the CPU version). Also, what I took away from that workshop is that OpenACC is not applicable to our simulation software. Automatic parallelization on the GPU works well only for small loops, whereas our loops sometimes run to more than 1000 lines. Put that into OpenACC and it will either not work at all (probably because the GPU memory is too small) or be really slow.
Furthermore, we mainly use the Intel compiler, and that compiler will never support OpenACC. Instead it supports OpenMP 4.0, which also provides directives for offloading to GPUs. I think OpenACC might eventually die because of this: a lot of people already use OpenMP and can just add a few more directives to make their code run on GPUs.
I personally would choose OpenCL over OpenACC and OpenMP (and even CUDA). OpenCL also gives you the possibility of writing a single code base for GPU and CPU, and it makes you think harder about the algorithms you choose. Just because of that, I expect to get better performance with OpenCL than with OpenACC. CUDA is not so interesting to me because it is not portable.