Science topic

# C - Science topic

Explore the latest questions and answers in C, and find C experts.
Questions related to C
Question
I would like to know how to implement the basic concept of the LBP operator with uniform patterns. I have implemented the basic concept of the LBP operator in MATLAB; now I want to use the concept of uniform patterns.
The problem is that I do not understand the basic concept of LBP with uniform patterns. How exactly should it be done? Is there any documentation available that can help me?
It uses a mapping table, since uniform LBPs are only a subset (in a manner of speaking) of all calculated LBP codes. You can find more info in the papers available here, and also the Matlab code.
The mapping lets you map the codes to different combinations, such as uniform or rotation-invariant patterns.
Question
I want to know whether standard functions (for example sqrt) in C/C++ can be redefined or not. If yes, then how?
In case the question is aimed at overloading existing C++ functions, I agree with Peter. Nonetheless, as a C++ teacher (and developer) I strongly recommend avoiding such overloading.
The principal problem is that if someone (maybe including you) later finds "sqrt" or another function in the code, they will assume you used the default implementation. This can lead to confusion and mistakes.
If you want to create your own implementation of common functions, use your own namespace. In the code, write the name of the namespace before the function; then it will be clear which function was used. (Do not write "using namespace math;". That can lead to similar confusion!)
Cheers
David
#include <iostream>
#include <cmath>

// namespace with an appropriate name
namespace math {
    double sqrt(double x) {
        double result = 1.0; // your own calculation
        return result;
    }
}

int main(int argc, const char * argv[])
{
    // common function
    double commonResult = sqrt(23.5);
    std::cout << "Common: " << commonResult << std::endl;

    // my special calculation, it is obvious from the namespace
    double myResult = math::sqrt(23.5);
    std::cout << "My: " << myResult << std::endl;
    return 0;
}
Question
Which are the advantages and the disadvantages of the most common languages?
Do we have to work only with "low level" languages like C/C++?
There are really two kinds of languages:
- languages that are designed to minimize the time programmers spend programming
- languages that are designed to minimize the time computers spend computing
The first category consists of languages that are easy to program in: interpreted languages such as Python, Julia, R, MATLAB... The most popular free alternative is likely Python; in some domains, like biology or statistics, it might be R. If you feel the visceral need to cough up some cash, then you could buy a MATLAB license. They all offer easy ways to plot graphs, do complex mathematics (eigenvalues, inverse matrices, conjugate gradient, Monte Carlo), query databases, the web...
The second category is composed of C/C++ and - God forbid - clunky old Fortran. They are harder to program but should be more computationally efficient. If the programs are well written.
The default option should be to minimize the time you spend programming and go with the first category. Your time is valuable, the computer's is cheap. So minimize the time you spend programming and maximize the time doing actual physics. As Google puts it (and they do a fair amount of compute): Python where we can, C++ where we must.
There is a tired argument that to do high-performance computing it is absolutely necessary to use low-level languages such as C and C++. However, most high-performance physics software devolves the heavy lifting to external libraries, such as BLAS, LAPACK, FFTW, and others. In that case, whether these libraries are called from a language of the first or the second category has little effect on the speed of the code.
Furthermore, most languages of the first category offer some easy way to interface with languages of the second category. Most physics software is such that the computer spends most of its time in 5% of the code, e.g. in the kernel. So it is always possible to move that kernel to C/C++ later on and interface with it from the interpreted language. The best of both worlds! Google "Cython" for the Python way of doing this.
Question
I'm looking for Linux command-line tools (or, even better, routine libraries in C/Fortran/Pascal) that calculate knowledge-based protein structure quality scores. I know procheck, which would suit me well, but I do not need the multitude of PostScript plots it creates, because I plan to process millions of protein conformations and to obtain a robust ranking by this knowledge-based "goodness" score, as a complement to classical force-field energies. Do you know of a way to switch .ps file generation off in procheck? Do you know other similar tools which just focus on scoring, not on detailed reporting?
Question
We have recently developed a tracking system capable of following several individuals keeping their correct identities in spite of crossings or overlaps (www.idtracker.es). It will be published in Nature Methods in June. The current version of the software is in Matlab, and we are now considering changing to another programming language. The main reason to change is to make it easy for other developers to join the project.
We have two questions, and even if you are not an expert in the pros and cons of the different languages, we would like to know your opinion about the first one:
1- If you considered contributing to an open-source project, what programming language would be easier and more attractive for you?
2- What is in your opinion the most adequate programming language for our project, and why?
Our requirements are:
- Powerful image processing tools
- Possibility of having a user-friendly interface
- A large community of developers
- Multi-platform
- Simplicity of the language
- Good performance
It depends on what stage your project is at. If you are prototyping, Matlab or Python could be useful.
However, for production purposes, go with C/C++.
You might want to check out OpenCV: http://opencv.org/
Question
I need to run a piece of software in which a number of the functions are written in C while some others are in Matlab. I want to run the software from MATLAB.
MEX is the better option if you have an older version of Matlab; otherwise, look for live webinars on the MathWorks website, where you will find many live demonstrations as well.
Question
Can anyone name some tools, and explain how it can be done on today's multi-cores?
Automatic parallelization works only in simple cases. For anything non-trivial you have to do it by hand. There is a vast literature on parallelization methods and efficiency. In shared memory (including multicore), OpenMP is probably the easiest to start with, as pointed out by Thoralf. Intel Threading Building Blocks (TBB) is another attractive option if you use C++ only. Beyond shared memory, the dominant model is MPI (the Message Passing Interface).
Question
How can code written in a high-level language be pre-processed so that a parallelizing compiler can compile it better for parallelism?
There are a few tools available; see http://en.wikipedia.org/wiki/Automatic_parallelization_tool. Additionally, some proprietary compilers have options for automatic parallelism, though anything machine-generated will only be a start.
OpenMP directives can easily be added manually to loops so that each iteration actually runs as a separate thread, concurrently. The caveat is that updating a shared variable between the threads becomes problematic and will effectively re-serialize the code, even though it is running as separate threads.
There is lots of research in this area, because the growth of multicore/manycore chips is forcing a rethinking of key algorithms currently in use. Just making them multi-threaded may not be enough to guarantee correct results: the algorithm itself may have to be rewritten.
Question
Is there an A* implementation for NVIDIA GPUs? Or a library for HPC stuff on GPGPUs at least? Some real-life parallel algorithms implemented on them would be good too.
I think that if you are interested in GPU computing applications, the best source of fully implemented CUDA codes is the CUDA samples from NVIDIA. These samples are available online and they cover many diverse applications.
Question
The question was,
In Bouboun University's information systems, the following arrays are used to store student details:
1. Stud_Names (array of strings) holds the names of 100 students
2. Stud_Map (array of strings) holds the IDs of the 100 students (whose details are stored in Stud_Names)
3. Stud_Marks_Mod1 (array of floats) holds the grades of each of the 100 students in Module1
4. Stud_Marks_Mod2 (array of floats) holds the grades of each of the 100 students in Module2
5. Stud_Marks_Mod3 (array of floats) holds the grades of each of the 100 students in Module3
Write a program in C which computes the average marks for each student and displays the highest average.
Problem: When I enter the details for the first student everything is correct, but when I enter the second student's details I cannot enter the name; it is skipped and the student ID prompt comes in its place. Any advice on how to correct the code?
My program:
#include <stdio.h>
#include <string.h>

int main()
{
    char Stud_Names[100];
    char name;
    char Stud_ID[100];
    float Stud_Marks_Mod1[100];
    float Stud_Marks_Mod2[100];
    float Stud_Marks_Mod3[100];
    const int MAXLENGTH = 100;
    int b = 1, i, sum = 0;
    float avg, avg1;
    int max = 0;

    for (i = 0; i < 2; i++)
    {
        printf("Enter student %d name: ", b);
        fgets(Stud_Names, MAXLENGTH, stdin);
        Stud_Names[i] = name;
        printf("Enter student %d ID: ", b);
        scanf("%s", &Stud_ID[i]);
        printf("Enter student %d marks for module 1: ", b);
        scanf("%f", &Stud_Marks_Mod1[i]);
        printf("Enter student %d marks for module 2: ", b);
        scanf("%f", &Stud_Marks_Mod2[i]);
        printf("Enter student %d marks for module 3: ", b);
        scanf("%f", &Stud_Marks_Mod3[i]);
        sum = Stud_Marks_Mod1[i] + Stud_Marks_Mod2[i] + Stud_Marks_Mod3[i];
        avg1 = sum / 3;
        printf("Average of student %d mark is %.2f \n", i + 1, avg1);
        b++;
    }
    if (avg1 > max)
    {
        printf("Highest average mark is %.2f ", avg1);
    }
    return 0;
}
The problem is that Stud_Names[100] is an array of char, not an array of strings.
When you enter the second name you just overwrite the same buffer, and you also lose the first name.
You should define
char Stud_Names[100][MAXLENGTH];
and in the loop
scanf("%s", Stud_Names[i]);
and the same for Stud_ID and all the other strings. Note also that mixing fgets with scanf leaves the newline in the input buffer, which is why the name prompt appears to be skipped.
Why don't you use a structure? It is a better way to organize the data.
typedef struct
{
    char name[MAXLENGTH];
    char ID[MAXLENGTH];
    float mark1;
    float mark2;
    float mark3;
} STUDENT;

STUDENT mathClass[100];
Question
I want to know the real differences in terms of practical use, with examples other than the typical ones.
A reserved word is part of the C/C++ language. A standard identifier is not. For instance, "return" is a reserved word. It is not something you can ever redefine. You cannot have a variable named "return", ever. The word is part of the language. On the other hand, "cout" would be a standard identifier. It is "standard" in the sense that it is part of the C++ Standard Library. But you are under no obligation to use the Standard Library, or indeed, any library. Libraries are not part of the language proper. And you are free to define a variable in your program named "cout" and do something useful with it. Here is a simple example (indentation will probably be messed up by RG):
#include <iostream>
using namespace std;

int main()
{
    int i;
    i = 123;
    {
        // int return;  // This will never work...
        int cout = i * i;
        i = cout - 10000;
    }
    cout << "i is now " << i << endl;
}
Question
I have a 15*15 banded symmetric matrix whose inverse is to be computed using symbolic math. Is there any proof that, for the inverse of a symmetric matrix with fixed bandwidth, the elements that were zero in the original matrix will remain zero in the inverse as well?
Question
As V. Toth mentioned, there is no real difference in how you open a file. There is a difference, though, in how you store data into a file. Take a floating-point number as an example. A single-precision floating-point number takes up 32 bits in memory. If you store its binary representation in a file it will take 32 bits. However, assuming 8 bits per character in a text file, we could only store 4 digits in the same space, e.g. 2.01 or 5E-3. If you want higher precision stored in a text file you will need more space than in the binary representation. Usually you will cut off the numbers at some point to save storage space. But no matter how many digits you store, for most numbers you will lose precision due to the conversion between the binary representation in memory and the decimal representation in the text file.
Coming back to the original question: whenever you store numbers in a text file you will probably convert them back into their integer or floating-point representation while reading them in. If you just store the numbers in memory after reading them in, then there is no difference between using binary or text files. However, if you hold a representation of the full file in memory, a binary representation of your data takes up less space than the textual representation. Which type of file format you choose depends on your needs: binary can only be read efficiently by the computer, whereas a textual representation is more suitable for humans to read.
Question
I want to draw some 2D patterns. It should work with VC++ or C++. I tried GDI and GDI+, but they do not work well when I want to color the new shapes. I want to draw some new shapes and color them as independent units.
I appreciate anyone who can give me some advice.
Even though the best choice in C/C++ is probably OpenVG, OpenGL libraries also allow you to draw arbitrary polygons in a 2D fashion: all you have to do is set an orthographic instead of a perspective projection matrix.
Nevertheless, my advice is to use a different (and simpler) language + library for this kind of task. For my 2D pattern drawing I rely on Python + the pyGame library: excellent results with a few lines of code.
Question
I am looking for a transient finite element model that handles heat and mass (stable isotope) transport and nonisothermal fluid properties (thermal buoyancy driven flow). Preferably with text input and output files so I can link with UCODE. FEFlow, COMSOL, or TOUGH2 would work. I would love to learn TOUGH2, especially iTOUGH2, but I don't have any money or financial support and I have no government or academic affiliation at this time.
The code I used for my MS thesis in 2002 was written by Steven J Cook for his PhD dissertation in 1992. I recently tried to recompile the code but I couldn't find a compiler that works. Apparently there have been some significant changes to the C language (written in C90?) and getting the code compiled is beyond my abilities.
I would be using the numerical model (or compiler) to improve and publish my MS work (12 years later!).
Any help with compiling suggestions or free numerical modeling programs would be very much appreciated!
You might also consider using OpenGeoSys http://www.opengeosys.org/
It is Open Source, it has all the capabilities you need and you should be able to use it with and without GUI on different platforms.
Question
I am currently working on ECDH.
For some other implementations, take a look at curve25519 (http://cr.yp.to/ecdh.html) and the RELIC toolkit (code.google.com/p/relic-toolkit).
Question
I need some help with matlab and FIS.
I need to create a Matlab m-file that generates a FIS based on the following matrices:
c= 0.5473 0.4099
1.5829 1.9309
1.4429 1.9982
0.3584 -1.1428
0.9137 0.4431
0.6094 0.5231
0.4707 0.6176
sigma=0.2433 0.1866
1.6739 2.6266
2.4323 3.6111
2.3515 1.0361
0.1980 0.5069
0.5500 0.5236
2.0594 0.3309
where :
- c and sigma are the parameters of the Gaussian membership functions.
- number of columns = number of inputs
I have been trying with the genfis, but I do not know how to create the Xin and Xout matrices.
Could anyone help me?
Question
Pseudo code for any simple vector calculation would be very helpful.
OK. Suppose there are two vectors represented using one-dimensional arrays a[] and b[] of length l. Then the dot product of vectors a and b in the C language can be written as:
double product = 0.0;
for (int i = 0; i < l; i++)
    product = product + a[i] * b[i];
The dot product is the sum of the products of the corresponding entries of the two vectors and returns a single number.
As far as initialization is concerned, it is the same as for any array: initialize the length of the array as the length of the vector, then initialize each array element with the corresponding vector component.
You can apply the rules of initialization, addition and multiplication of two vectors to arrays once you understand how to work with arrays. Moreover, you can define a structure containing an array that represents a vector, and use this structure any time you need a vector in a program.
Question
I have been asked to submit an innovative assignment, self-programmed and using any programming language, but I only have competence in Python and the C language. So please help me out.
Thanks
What is the idea you have? You have mentioned nothing in your question.
Question
I need a program to read the SWH_T value from my text file, calculate the average of the SWH_T values and save the output in another text file. Can anyone suggest the code? I am attaching the file for reference. I have a monthly file, Jan_txt, Feb_txt till Dec_txt, for each year; I need a program for that.
Amen to that, @mark, modulo a missing "not" in your message.
Averaging numbers taken from a file must be exercise two in most beginning programming classes :-(. The interesting part of the program seems to be reading from the fifth white-space-separated column, for which C++ programmers have a neat pattern of code ready to go. It's not a research topic.
Question
I work with some codes (with xgrafix) that generate dump files containing all the information of the simulation. The code can read the dump file itself, but is there any way to read it externally using some editor? N.B. gedit/vi failed to read it.
My guess would be that these are restart dump files, made so that you can restart the simulation. In this case they will probably be in an undocumented format that is known only to the program's designers and maintainers. Possibly there are even sections of binary data that are simply rolled into the memory space of the running program.
Why do you need to read the dump anyway? Can't you reload it and get the info you need from within the parent program? If the program is open source you might be able to cannibalise it to just run a restore and then output the data that you are interested in in a more accessible format.
Question
I want to generate a graph for the simulation results of my routing scheme using C.
You can write your data to a "DOT" file, which is a graph description language in plain-text format.
Then you can use Graphviz to generate ps, svg, pdf ... files. With Graphviz you can control the layout and the method of drawing the graph.
Question
I would like to develop a diagnosis application on Android, but I do not know whether it is feasible to realize that with Java/C++ and Matlab.
You can use Java within Matlab. The link below describes this approach:
Besides using OpenCV and the Android NDK tools, you can implement your own algorithms with RenderScript in the Android API:
Question
These days I am doing a literature review on new trends in the control of nonlinear systems, but it looks like this is a very broad subject. What are the new research areas in the control of such systems?
Wang, T., Zhang, Y., Qiu, J., & Gao, H. (2015). Adaptive fuzzy backstepping control for a class of nonlinear systems with sampled and delayed measurements. IEEE Transactions on Fuzzy Systems, 23(2), 302-312.
Liu, T., & Jiang, Z. P. (2015). Event-based control of nonlinear systems with partial state and output feedback. Automatica, 53, 10-22.
Chen, W., Sun, J., Chen, C., & Chen, J. (2015). Adaptive control of a class of nonlinear systems using multiple models with smooth controller. International Journal of Robust and Nonlinear Control, 25(6), 865-877.
Aranovskiy, S., Ortega, R., & Cisneros, R. (2015). Robust PI Passivity-based Control of Nonlinear Systems: Application to Port-Hamiltonian Systems and Temperature Regulation. arXiv preprint arXiv:1503.02935.
Question
Hi all, I intend to write a C program which first reads from the keyboard and then from a pipe. Suppose the program is called myProgram and is run as follows:
program1 | myProgram
where program1 writes something to stdout via printf(). Best regards,
Windows or Linux? For Linux just use "select" [1]. Another approach would be threading, or using Linux pipes directly within your application [2].
[1] http://linux.die.net/man/2/select
[2] http://man7.org/linux/man-pages/man2/pipe.2.html
Question
How much time will it take to evaporate 100 ml supernatant broth + 100 ml methanol at 65 °C?
It also matters whether it is under some sort of vacuum or not.
Question
The authors of the expression you are inquiring about probably meant to say that a sample was stored / should be stored in a dinitrogen atmosphere and put in a freezer at -20°C. But this is hard to say as the expression is poorly phrased.
Question
Can you suggest sources available for research on the novel?
Question
We are all familiar with the effects of carbon dioxide on our environment: carbon dioxide is responsible for the greenhouse effect. I would like to reduce greenhouse gases in the environment. What is the process of CO2 splitting?
Question
A somewhat naive researcher attempts to estimate an aggregate consumption model for the US economy. He estimates the regression C = β0 + β1·Yd + β2·S + U, where C is consumption, Yd is disposable income and S is saving.
a. How good a fit is this researcher likely to get when the equation is run?
b. What types of statistical errors may the model suffer from?
c. Can you generalize your conclusion?
Question
Hi, I need to know how to convert Matlab code to C code.
Question
I have checked an injection-molded PP sample by DMA: it was first cooled to -100°C, then heated to 150°C and cooled again to -100°C, at a 1°C/min heating and cooling rate and at 1 Hz. In the heating step I see a Tg of roughly 2-7°C (in tan delta), while in the cooling step right after I find a much lower Tg, between -8°C and -4°C. My question: is this change due to a change in the properties (crystallinity, etc.) of the PP at high temperature, or is it normal to have a shift in Tg depending on whether you analyse heating or cooling? Is it because of disordering or the thermal history of the PP sample during heating? Or is it only because of slippage that occurred during the DMA analysis?
Alex is right to say that heating something above Tg erases part of its thermal history.  Tg is a function of heating/cooling rate.  If a sample is cooled quickly, the polymer chains have less time to rearrange themselves and the glass transition occurs at a higher temperature; the result is a glass with a high free volume.  If a sample is cooled slowly, the polymer chains have more time to move and rearrange themselves even at lower temperatures; the glass transition temperature is pushed lower and the result is a glass with less free volume.  The equivalent effects are seen on heating.
If your Tg measured on heating is higher than that measured on cooling, the implication is that the cooling rate you're using in your DMA experiments is slower than the rate at which the component was cooled during manufacture.  You could possibly confirm this by doing another DMA run on the same sample; the second time, you should see the lower Tg on both heating and cooling steps (assuming the heating and cooling rates are the same!)
For further information, see any good text book on polymer science or (for example) the paper by Moynihan et al., J. Phys. Chem. 78 26 (1974).
Question
The following errors are displayed after compiling:
1. cannot find -lobjc
2. ld returned exit 1 status
#include "stdafx.h"
#include <windows.h>
#include <winsock.h>

#define NETWORK_ERROR -1
#define NETWORK_OK 0

int APIENTRY WinMin(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR IpCmdLine, int nCmdShow)
{
    int nret;
    // Initialize Winsock
    WSAStartup(0x0101, &ws);
    // Create the socket
    SOCKET commsocket;
    commsocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (commsocket == INVALID_SOCKET)
    {
        MessageBox(0, "Could not create comm. socket", "Error", 0);
        WSACleanup();
...
Also, since you are on Windows: are you using, or trying to use, a program that makes use of MFC with a compiler/IDE that doesn't come with MFC?
Question
Is any parallel implementation of fuzzy c-means available?
Question
I am trying to solve the N-S equations using a projection method. I know sparse matrices are used in Matlab to solve the pressure Poisson equation, but I don't know how. I want to implement the code in C and finally in CUDA.
The best way to solve Poisson-like problems is to download a publicly available algebraic multigrid code (there is a choice of sequential/parallel codes written in either Fortran or C) and to use it as a preconditioner for the CG method (again, this can be downloaded as library code).
Question
Temperature in the chamber is 1600 °C.
Question
A lot of debate takes place about which programming languages to use when implementing numerical methods, especially for problems that require handling a large number of computations, as in the case of FEM, where professional programmers and scientists argue whether to use C, C++, Fortran, Matlab or other programming tools in order to develop fast and accurate software. Do you believe that using a fast language is the key to developing fast and reliable software, or does the algorithmic implementation affect the final performance most?
I think that the algorithm and its implementation are far more important than the programming language. Let me give an example of why I think that is. Some time ago I needed an implementation of a so-called cell tree. I got the implementation from a peer who already implemented it according to the paper. I wasn't satisfied with the initial performance of the implementation: It took 10 minutes to build the initial cell tree. So, I threw in some profiling to figure out the reason for this. First thing I did was replacing the STL vector (C++) with a QVector from the Qt library. Now, I was down to 3 minutes. Still not satisfied with the performance I tackled the implementation myself and improved some places where I thought there were some stupid choices. This brought me down to 1 minute. Finally, I got the original source code from the inventor of the cell tree. He had another trivial optimization which prevented multiple allocations/deallocations. Applying this trick to my source code brought me down to about 10 seconds.
This shows that not only the algorithm is important, but also its implementation. During the whole optimization the algorithm never changed, but its implementation did. So, the question is which programming language supports you in writing an efficient implementation of the algorithm. I've recently watched a video about Python for Fortran programmers. In that talk the solutions to the same problem written in both languages have been compared. Due to the lack of suitable libraries for Fortran the algorithm (and not only its implementation) was slower for Fortran than for Python. Not to mention that the Python code was a lot shorter. (For small input sizes Fortran was still faster.)
For an expert programmer, probably a computer scientist, the algorithm is far more important. Sometimes it is also important to choose between compiled and interpreted languages. However, for the average programmer the language is more important since it will define what algorithm you choose. As long as you use libraries with advanced data types it is easier to stumble on an efficient solution to your problem.
I also have to agree with Xinfu. From my experience I can tell that when you rewrite your code a second time it will always be cleaner and faster. Thus, using a scripting language for prototyping and a compiled language for the final implementation can have a great advantage.
Question
Vibration analyzers in measurement, diagnosis and speech recognition use a state machine for event-driven control. Which programming language is best for a software state machine in an event-driven audio signal processor?
@Christian E. Jacob: 1-2 years ago I implemented a state machine in C++ with support for most of the UML 2.4 features...
In case of any problems just write me an email: mail@christianbenjaminries.de
The sources are currently not published on Sourceforge (only binaries); just write me an email and I will send you the sources.
UML 2 Statemachine for C++ Code Generator
What is UML 2 Statemachine for C++ Code Generator?
UML 2 Statemachine Code Generator is a developer framework for easy implementation of state-machine-based applications. With this framework, only one domain-specific language (DSL) specification is necessary to create executable code for Linux, Mac OS X, and Microsoft Windows. The framework saves a lot of time and effort during implementation; you also always get valid, high-quality generated code, based on a well-defined C++ standard.
Features
Current stable version is 1.12; used by some customers!
Based on UML 2.4 Superstructure specification
A commercial-grade cross-platform Harel UML 2 Statecharts framework for
Linux (32-bit, 64-bit), Mac OS X, and Microsoft Windows 7 (32-bit, 64-bit)
Easy to use Domain-specific language (DSL)
Embed your C/C++ code within UML Statemachine's DSL
DSL parser is based on ANTLR 3 Parser Generator
External and internal event handling for all specified (sub-)transitions!
Supports thread based orthogonal execution of different state flows
Supports guarded transitions between states
Supports history states; resume on a specific state
Supports initial states, final states, terminate states, and entry-/exit states of regions
Supports large scale state machines with hundreds of states
Uses Transition control flows, no slow if-else/switch-case decision statements
Library based implementation with a well defined Application-programming interface (API)
Doxygen documentation of all API functions which are usable in any applications
All status messages can be redirected to your specified target
Syntax highlighting for gEdit
And many more...
Question
• The C language is a subject of Computer Science and IT, so what is the use of the C language in Civil Engineering, Chemical Engineering, Mechanical Engineering, Electrical Engineering, etc.? If you are an engineer, please tell me at least one use of the C language in your branch.
More important is that you learn how to keep a final outcome in mind, write logical steps to achieve it, and learn how to operate a computer. Moreover, computational tools are used in nearly every engineering field. All the basic concepts are definitely helpful to all branches, even where they are not implemented directly.
Question
The observer-based sliding mode controller seems to be working in MATLAB Simulink, but since I couldn't implement it in real time using the MATLAB Real-Time Workshop, I've written a piece of C code trying to implement it through a dSPACE card. However, I am not able to stabilise the system with the same controller that works fine in Simulink. Does the controller have to be designed specifically in discrete time, or is there another problem?
Rgds
I have implemented an observer-based sliding mode controller for a DC motor drive using Matlab/Simulink. The blocks enable easy graphical programming of the different control algorithms; at the end, with the Real-Time Workshop and the Code Composer for the C3x/4x digital signal processors, the binary executable code can be generated from the Simulink model, downloaded to the DSP controller, and executed in real time, similar to the dSPACE controller.
One difference between the program built from Simulink blocks and the hand-written C code might be the evaluation of the integrator and derivative blocks; integrator reset and limiters can also cause this kind of instability.
m.dal
Question
I’ve been trying to implement LQR with a state observer in real time. Since I couldn’t manage to implement it using MATLAB Real-Time Workshop, I had to write the C code for the LQR and state observer myself. However, it seems the C code isn’t working at all and I can’t get my head around it. My first question is: when I design my state-feedback matrix and observer matrix in MATLAB before using them in my C code, do I have to design them (K and L) in discrete time? I’d appreciate it if someone could give me advice on how to implement LQR + observer in real time using either MATLAB or C, as I don’t really know why I keep running into implementation issues.
Best regards
Looks like you have been given some good advice already, just wanted to add a few suggestions.
There are typically two ways to implement a control law like this: directly as a discrete-time filter, or as a continuous-time filter which you solve using a numerical integration scheme, such as the Euler or Runge-Kutta method. The latter is what is standard in, for example, Simulink if you solve a continuous-time filter there, and it will typically be more computationally demanding.
I would suggest using a discrete-time implementation, but you have to make sure you are using gains that are suitable for discrete time, as already mentioned. Newer versions of MATLAB have the 'dlqr', 'lqrd', and 'kalmd' functions, which might be useful for this.
Also, you can run your C code as a so-called S-function inside Simulink. This can be very useful in debugging your code. Set up your model of the system as a regular LTI system block, and your LQR and observer implemented in C in an S-function block.
Hope this helps...
Question
Or, if anyone has C or C++ code for LEACH, could they share it? I have a project on energy performance analysis in WSNs that involves the LEACH cluster protocol. It would be good if anyone could enlighten me on how to simulate the LEACH routing protocol in the OPNET simulator, or send me C or C++ code for LEACH.
Question
I have lots of experience using Java, C and FORTRAN for scientific programming. In Java I make heavy use of abstract classes, interfaces and generics to make my code as re-usable as possible. This has really cut down my development overhead without having too much impact on runtime. I have yet to experiment with hardware acceleration such as that offered by Cuda, which other use to good advantage, in my scientific programming (quantum dynamics of open/stochastic systems). Working mainly on Mac's the emergence of Swift and Metal provide new opportunities for scientific programming in a modern environment with fast execution.
I have started to play with Swift and it seems promising, with the playground looking like a potentially good teaching tool for my students.
I would especially like to hear from those who have experience using hardware acceleration and have benchmarked some relevant simple Swift+Metal code (e.g. matrix multiplication). I would be grateful for information on your experience or opinions on the future potential of Swift+Metal for high-performance scientific simulation.
I don't have any experience with either Swift or Metal, but I already had a quick look at Swift as a programming language. My first impression is that Swift will be fast enough for scientific computing. In the keynote presentation they already showed that in some cases it can be faster than Objective-C. That is due to new language features that are more high-level, but because of this allow for better compiler optimizations. The underlying Objective-C runtime is already highly optimized and very fast. The only disadvantage I see is the lack of compatible scientific libraries written directly for Swift. It is possible to interface with C and C++, but programming can only be efficient in Swift if you have an interface optimized for the language. That means that you need to write your own wrapper classes for existing libraries.
I also had a quick look at the link you provided on Metal compute shaders. It looks all straight forward. Since Apple conceived OpenCL I guess they know what they are doing here. Though Apple has the best OpenCL wrappers around. It seems that similar wrappers are still missing for Metal. What I don't like about the compute shader kernels is that their syntax is closer to graphics shader programming than to regular function calls like in CUDA or OpenCL. If you already know OpenGL and GLSL this approach is easy to understand, but for the regular scientific programmer this concept is harder to grasp. You need to know a little bit about graphics hardware to understand the concepts. I don't expect Metal to be faster than OpenCL. Therefore, I suggest trying to use Swift with OpenCL in the hope that Apple's OpenCL wrappers can also be used with Swift.
Question
I need to read a file using C as a programming language and QT.
FILE FUNCTIONS
1. fopen() //for opening a file
FILE *fopen(const char *path, const char *mode);
The fopen() function is used to open a file and associates an I/O stream with it. This function takes two arguments. The first argument is a pointer to a string containing name of the file to be opened while the second argument is the mode in which the file is to be opened. The mode can be :
• ‘r’ : Open text file for reading. The stream is positioned at the beginning of the file.
• ‘r+’ : Open for reading and writing. The stream is positioned at the beginning of the file.
• ‘w’ : Truncate file to zero length or create text file for writing. The stream is positioned at the beginning of the file.
• ‘w+’ : Open for reading and writing. The file is created if it does not exist, otherwise it is truncated. The stream is positioned at the beginning of the file.
• ‘a’ : Open for appending (writing at end of file). The file is created if it does not exist. The stream is positioned at the end of the file.
• ‘a+’ : Open for reading and appending (writing at end of file). The file is created if it does not exist. The initial file position for reading is at the beginning of the file, but output is always appended to the end of the file.
The fopen() function returns a FILE stream pointer on success while it returns NULL in case of a failure.
2. fread()/fwrite() //for reading/writing data from/to the file
size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream);
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
The functions fread()/fwrite() are used for reading/writing data from/to the file opened by the fopen() function. These functions accept four arguments. The first argument is a pointer to the buffer used for reading/writing the data; the data read/written is in the form of ‘nmemb’ elements, each ‘size’ bytes long; the last argument is the FILE stream pointer.
On success, fread()/fwrite() return the number of complete elements (not bytes) actually read/written from/to the stream opened by fopen(). On failure or end-of-file, a smaller count than requested is returned.
Other function pairs for reading/writing files include the following, which are not discussed in the example program.
• fgets()/fputs()
• fgetc()/fputc()
• fscanf()/fprintf()
3. fseek() //used to manipulate the position of the pointer within the file, once the file has been opened
int fseek(FILE *stream, long offset, int whence);
The fseek() function is used to set the file position indicator for the stream to a new position. This function accepts three arguments. The first argument is the FILE stream pointer returned by the fopen() function. The second argument ‘offset’ tells the amount of bytes to seek. The third argument ‘whence’ tells from where the seek of ‘offset’ number of bytes is to be done. The available values for whence are SEEK_SET, SEEK_CUR, or SEEK_END. These three values (in order) depict the start of the file, the current position and the end of the file.
Upon success, this function returns 0, otherwise it returns -1.
4. fclose() //used to close the handle to the file
int fclose(FILE *fp);
The fclose() function first flushes the stream opened by fopen() and then closes the underlying descriptor. Upon successful completion this function returns 0; otherwise EOF is returned. After a failure, if the stream is accessed further, the behavior is undefined.
5. Example Code
#include<stdio.h>
#include<string.h>
#define SIZE 1
#define NUMELEM 5
int main(void)
{
FILE* fd = NULL; // Declaring a file pointer, will be used later as a handle to the desired file
char buff[100];
memset(buff,0,sizeof(buff));
fd = fopen("test.txt","r+");//Opening the file for reading and writing ("rw+" is not a valid mode string; "r+" opens an existing file for update). Assumes file "test.txt" exists in the same directory from where the executable will run
if(NULL == fd) //file pointer returns NULL, when the file cannot be opened using the specified mode
{
printf("\n fopen() Error!!!\n");
return 1;
}
printf("\n File opened successfully through fopen()\n");
if(NUMELEM != fread(buff,SIZE,NUMELEM,fd)) // fread returns the number of elements read (here NUMELEM one-byte elements); you can use other conditions or other functions to read
{
return 1;
}
printf("\n The bytes read are [%s]\n",buff);
if(0 != fseek(fd,11,SEEK_CUR)) //Manipulating the pointer to some other location within the file
{
printf("\n fseek() failed\n");
return 1;
}
printf("\n fseek() successful\n");
if(strlen(buff) != fwrite(buff,SIZE,strlen(buff),fd)) // fwrite returns the number of elements written; here we write strlen(buff) one-byte elements
{
printf("\n fwrite() failed\n");
return 1;
}
printf("\n fwrite() successful, data written to text file\n");
fclose(fd); //Close the File Handle to close the actual file
printf("\n File stream closed through fclose()\n");
return 0;
}
>>>>Initially the content in file is :
$ cat test.txt
hello everybody
>>>>Executing The Code
$ ./fileHandling
>>>>Output
File opened successfully through fopen()
fseek() successful
fwrite() successful, data written to text file
File stream closed through fclose()
>>>>Again check the contents of the file test.txt. As you see below, the content of the file was modified.
$ cat test.txt
hello everybody
hello
Question
Which algorithms would you recommend for optimizing a real-valued unconstrained unimodal function? Consider that the objective function could be non-separable, ill-conditioned and high dimensional. References to good implementations in MATLAB, C or Python will be greatly appreciated.
Hi Everyone,
Thank you VERY much for all the answers! Here are some PRELIMINARY results. I will present more results as I get or implement more algorithms... The presented results show the convergence curves for Nelder-Mead, Genetic Algorithm, PSO and CMA-ES on the unimodal Sphere and Elliptic functions of the Large Scale Global Optimization benchmark. These are high-dimensional, ill-conditioned and non-separable functions (see the attached link for more information).
As can be noticed, Nelder-Mead shows very slow convergence. We have obtained very good results with hybrid algorithms using Nelder-Mead on low-dimensional molecular docking problems (results will be published soon), but as expected, in high-dimensional search spaces Nelder-Mead's simplex collapses too fast. GA and PSO are somewhat better, but still far from CMA-ES. However, even CMA-ES needs over 1,000,000 function evaluations to reach solutions near the optimum (which is too much for hybridizing and restarts).
I'll keep testing more algorithms... some of your suggestions seem very promising. It would be great if you could provide a MATLAB code :)
Question
I'm trying to figure out what library is the most accurate to date. I use in my codes GSL (GNU Scientific Library) which the most complete I found so far. Do you prefer something else, especially for special functions like Bessel and Neumann? Those are the ones I'm most interested in.
I am mostly using libraries for linear algebra, e.g. Armadillo or Eigenlib. In general I like to use the Boost library for many different things. They also have Bessel functions. They tend to have a very high accuracy. Just have a look at their comparison to GSL:
Question
see above
Scalar variables and velocities are defined at the cell center and at the cell faces, respectively, in a control volume.
Question
A simple subroutine which gives a data file and produces a simple plot. I want to use it in more complex programs for visualization of outputs.
There are many, many applications specialized in the visualization of data. However, if program development goes hand in hand with the generation and inspection of data, it is often advantageous to have the visualization tools directly available as C++ classes. Then adding a few lines of debug code lets data be generated under visual control that otherwise would be generated 'in the dark'. This is the way I have worked for more than 20 years, and it led to the C++ class system which I call C+- (since it makes active use of only a subset of the rather complex C++ syntax). C+- can be freely downloaded from my website, together with examples and documentation. Feel free to ask for help if you consider using this tool.
Question
Same code, runs well on Ubuntu but runs strangely on SunOS. It can't read data from a buffer or read out garbage character. Why is that?
Have you checked that the compiler is the same version on both systems?
Question
I have written an application on a Android platform. In this app, I'm using OpenGL ES2 and some C++ code with Android NDK. Now, I want to port this app to iOS (iPhone, iPad) platform. I find that IKVM/ MonoTouch or j2ObjC can convert Java application into iOS application. However, in this program, some C++ code and OpenGL are used, so I don't know how to port my program to iOS platform.
Can you help me deal with this problem?
One option to consider is to use web-based applications. In that case you rely on HTML5 and JavaScrips, which similarly work on both iOS and Android. You can further improve the apps with use of JQueryMobile and similar JS libraries. Once the app is finished you have to make them work in the full-screen, and differently package/organize them for Android or Apple store. Of course, it depends on what the purpose and desired functionality is. I hope this helps.
Question
I want to calculate variable dependency in program using python and pyparsing.
It may be a bit off-topic, but if you use CLang/LLVM this may help you http://clang.llvm.org/doxygen/DependencyGraph_8cpp_source.html
It would not be hard to interface with Python.
free PIC microcontroller C compiler
Question
I want to know a really free PIC microcontroller C compiler and its IDE, do you know some one?
There is not much available for free for PIC controllers. The most widely used IDE is MPLAB from Microchip, which has a code-size limit; you also have mikroC PRO from MikroElektronika, which likewise has a code limit but ships with many built-in libraries that make for a good and easy kick start. Below are the links for both IDEs:
http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en019469 --> MPLAB
http://www.mikroe.com/mikroc/pic/ --> mikroC
Question
For software architecture reasons I can't currently control, we are acquiring data through a singleton library which is not thread-safe, nor can you run multiple instances of it for the same experiment. There is only one single valid instance of the source data structure at any one time. Therefore, our current design has a master thread copying that data to newly allocated internal data structures in our post-processing system, and sending them off to worker threads.
What I would like is to have a parallel version of the default memcpy call, that would hand off the copying task to an internally managed threadpool, ideally with both synchronous and asynchronous versions. Since one serious bottleneck right now is all tasks that bog the master thread, distributing the copying work would make sense. Naturally, we would need to synchronize before calling the "next event" method on our library API again. The task is not trivial to parallelize, though, since different architectures have different memory controller layouts, where you could even run into cache starvation issues etc if multiple threads access ranges nearby. Some kind of auto-tuning or at least architecture-aware library would be ideal.
Some suggestions for where to look would be appreciated. Currently, this code is using Boost, but e.g. no BLAS or LAPACK implementation, so even adding linking to those would increase the maintenance and portability burden.
Some years ago I worked with the "Intel integrated performance primitives" library (commercial). Originally, it was targeted at image and signal processing. However, it should offer a parallelized version of memcpy as e.g. ippiCopy_XYZ. This family of functions should perform quite well, especially on Intel hardware. Maybe the calling arguments suit your actual application. It also offers some control on how many threads to use.
Question
I have implemented and debugged my idea in JM software (an open source video codec in C) with visual studio. Now I need to run this several time to get desired results, but realized the that the debug mode results and release mode are different. What can I do? Which are correct and why did such a problem occur?
I executed my code in different versions of Visual Studio and got different results. Now I'm really confused and don't know what the problem is.
I think this is happening due to a time dependency in your implementation. Code in debug mode runs slower than in release mode. Also, different versions of Visual Studio use different versions of the framework and libraries (generally incremental), hence you may be getting different results. Another common cause of debug/release differences is an uninitialized variable: debug builds typically fill memory with known patterns, while release builds do not.
In debug mode, a lot of debug-related information is present in the final executable to facilitate debugging (i.e. inspecting variable values, viewing the stack and automatic variables, stepping through each line of code and seeing results, etc.). This makes the program run slower. In release mode (configuration), the debug information is not present and your code runs straightforwardly.
Debug config is useful in early stages of program development and verification of its correctness. Once it is debugged for errors if any, then release mode executable is produced and shipped (or in technical terms released). Hence debug version of executable is for programmers/developers while release version of code/executable is for the users.
Question
I have a code of about 1,800 lines and about 90 variables. Is there a limit on the maximum number of variables a C program can hold?
I assume you're not talking about the number of parameters you can pass through a function call. If that's so, the answer to your question is yes, but not in the way you might think. The limit is the amount of available memory storage. If we're only talking about RAM, that would be something less than 4 GB on a 32-bit machine due to address-space limitations. On a 64-bit machine the address space is 16 exabytes (2^64 bytes). Of course, most computers don't have anywhere near 2^64 bytes of RAM installed, so you're limited to whatever is installed. On the other hand, if you wish to think of disk storage as a place to put a "variable", the limit is even further away: modern file systems use 64-bit size fields, and some (e.g. ZFS) even use 128-bit structures, allowing files far larger than any installed RAM.
To the best of my knowledge, there's no maximum number of variables (ie: named storage locations) inherent in the C language itself.
Question
I am working on finding visualized flow graph of c program for that I want to get basic block form llvm and Clang.
I have an update for the previous answer. It is possible to generate the CFG for the LLVM code generated from Clang (not directly the C source).
For this, you have to install clang and opt and execute them in sequence, e.g. the following two commands (note that I have LLVM-3.1 installed):
> clang file.c -S -emit-llvm
> opt-mp-3.1 --dot-cfg file.s
Writing 'cfg.fun1.dot'...
Writing 'cfg.fun2.dot'...
etc.
(this is the output from opt).
Then you may generate a graphics file from the dot file as usual using dot:
dot -Tpng cfg.fun1.dot > cfg.fun1.png
Attached to the answer is an example of output.
If you want a CFG of the original C code, then you probably have to find something else or implement your own tool :) By the way it seems that Eclipse may have some bundle to do this for C/C++ programs. Good luck!
Question
A program written in any programming language (here in C#) may contain unnecessary executable code (extra lines of code, statements, expressions and loops, but not comments). How can I detect this quickly, and how can I prevent it?
As far as I know, ReSharper, FxCop or StyleCop can recognize unused code in C# programs. But sometimes it's a good idea to keep unused functions, for example if it is not yet known whether the function will be used later or not. If you comment it out, it's not compiled.
Regards,
Joachim
Question
I want to create a project which analyzes the source code of the program and gives analysis related results.
English is not my mother tongue; maybe that's the reason why I do not understand what language you want to use, what language you want to analyze, and where you want to add the analysis pass...
Anyway, if you want to analyze Python code, I'd suggest to write also in Python, and start from looking at already existing analyzers. My favorite is PyLint ( http://www.pylint.org/ ), but many more are available. They are often used in continuous integration and IDEs, if that is where you want the pass to be.
Question
"Source code", not a book/library or other things.
The matrix A is the result of such computations, and most of its entries are zero; we would like the matrix to be treated as sparse!
We're not going to manipulate the matrix; we're going to find the best method to solve the system.
I just need a subroutine that can solve it!
Question
How to uniquely identify the compact disk in windows c++?
Audio CDs don't have "audio files", they have a TOC with track information (position, length), which can be accessed only through low-level drivers or higher level wrappers. The ".cda files" you see in Windows are just a wrapper provided by the OS for convenience, but they don't contain actual data, nor do they link to the actual audio data. Still, you can use this information for calculating the above information (and use it to query services like e.g. CDDB).
Packet-written CDs, however, have other variables as well, e.g. DELETED files, so you must take that into account too. For example, are two CDs with 5000 same-size files, where one of them contains a "deleted" entry, to be considered "the same" CD?
I suggest you read the online CD Recordable FAQ for more details about how CD-Audio, CD-ROM, packet-writing etc. work.
Question
I found bouncy castle, is that preferable?
This is an old question, but I will bite anyway.
"Best" is always a subjective judgement, and in my answer I am not going to take a position and to declare a "winner". Let's just say that "best" is what gives you the results you regard as superior, and you need to find this out by trying all the alternatives and comparing the results. So if you were hoping to start a flame war, or to get a canned reply, tough luck.
I would, first of all, consider the cryptography library already available in .NET (in System.Security.Cryptography). Now that Microsoft has released most of the .NET source code, only paranoid programmers should object to trusting .NET on the basis of hypothetical backdoors and weaknesses planted into the source code by secret services. Unless you need functionality that is not part of the .NET library, I see no real need to look for a third-party product, and there are many reasons why you should not develop your own crypto API. Also, before wasting time with the internal workings of the Microsoft Crypto API 1.0, google "Cryptography API: Next Generation".
A good part of the .NET System.Security.Cryptography namespace is already available in Mono, but not all. Depending on what you need, you might be able to use Mono instead of .NET.
Bouncy Castle has been criticized by some as being a museum of old algorithms, several of them of dubious practical usefulness today. Nonetheless, it seems to be popular among some programmers, especially those who don't know .NET well enough to be aware of what it offers.
SecurityDriven.NET/inferno is another possibility. I know very little about it, however, it is so small that it is likely a simplified wrapper of a selected part of the .NET crypto functionality. If you like it, use it. If you don't, use the .NET functionality directly.
Question
Are there any tools dedicated to this?
That depends on your knowledge of OpenCL programming. The OpenCL architecture is clean, usable and user-friendly. Unfortunately, current OpenCL implementations are poor. I hope this will improve in the future; as more and more people start using it, the bugs will eventually be reported, triaged, and fixed. As the number of people who buy a particular card for its OpenCL capabilities increases, for instance because games start using OpenCL to speed up certain physics calculations, the makers of these cards will eventually be forced to fix the bugs and make development easier. Until then, we might have to keep on suffering for the speedups we get.
The Intel® Xeon Phi™ coprocessor is architecture built for discovery, with superior energy-efficient parallel performance ideal for many real-world applications, including meteorology, medical research, 3D design, and engineering.
********** Getting Started with Intel® SDK for OpenCL* Applications ***********
* OpenCL* 1.2 Features for Intel® Xeon® Processor and Intel® Xeon Phi™ Coprocessor
* Developers can apply today to participate in the upcoming new Beta program for the Intel® SDK for OpenCL* Applications XE for Linux* OSs at: http://software.intel.com/en-us/vcsource/tools/opencl-sdk-xe
* About Intel SDK for OpenCL Applications XE 2013 Beta with Support for Intel® Xeon Phi™ Coprocessor
Question
I have different frames of images from a video file. Now, on observing each frames it is noticed that there are many frames in which the objects has not moved. I need to do averaging of all those frames and form a single frame image using OpenCV. Can someone help?
Happy to help. :)
This may not be the exact answer for you, but it should clear some hurdles.
Assuming a surveillance setting, where only one or a few persons are searched for in an image frame:
If you are concentrating on determining whether a person is present or not, look at GPU HOG pedestrian detection in OpenCV 2.2+ (EMGU link: http://www.emgu.com/wiki/index.php/Pedestrian_Detection_in_CSharp).
EMGU also has a class specifically for surveillance.
For other objects you can look at custom Haar-trained XMLs; the training tools come with OpenCV itself, and Haar cascades are very fast.
Or you can learn about contours; then you should work on background subtraction.
Note: EMGU is just a .Net wrapper over OpenCV.
Look for these keywords, along with "Image Segmentation", GrabCut (automatic background/foreground segmentation), etc.
Question
What kind of application mostly use these languages ?
C is a fantastic high-level language. It is very productive and fast, has great performance everywhere and a large user community, its culture is highly professional, and it is really honest about its advantages and disadvantages. C is the closest to the hardware.
Several programming languages are C derivatives, such as Java, C++, C#, PHP and Python, which can be faster both to learn and to develop in; but eventually, when performance and reliability are important, C saves time and headaches.
To learn how to program in C, I recommend starting with the basics of computation; computation is much more than computer programming. I suggest reviewing types of algorithms, then learning the principles and applying the C language syntax and structured programming. This yields the abstraction layers needed to understand the more complex but modern, versatile and less error-prone programming paradigms, such as object-oriented programming (C++, C# and Java).
The C language is simple and expressive; its syntax and semantics are incredibly powerful. It makes it easy to think about high-level algorithms and low-level hardware at the same time. C has simple code and types, and optimization is straightforward. Portability is a result of its small set of concepts and complete definition.
In addition, you can use C code in standalone executables, as compiled functions for scripting languages, as kernel code, and as embedded code in a DLL; C functions can even be called from SQL. It is a frank language for systems programming and library building. If you need to write something once and have it usable in most environments and use cases, C is the only real option.
C has flaws!
Yes, C has many well-known "errors": no bounds checking, easy memory corruption through pointers, memory and resource leaks, tricky concurrency, and no modules or namespaces. Error handling can be uncomfortable and verbose, and it's easy to produce a whole list of failures where the call stack faults on harmful input to the process.
But the flaws of C are well known, and this is a virtue. All programming languages and their implementations have traps and blocking issues; C is ahead of them here, and there are a number of static and run-time tools to help you deal with the most common and dangerous mistakes. That some of the most used and reliable software in the world is based on C is proof that the errors are exaggerated and are easily detected and solved.
In summary, learning to program in C gives great insight into programming and makes learning other languages easier. It's like learning mathematics thoroughly; afterwards, the creativity is amazing.
Question
I need help with MS Visual Studio 2012. I'm not able to run the following program:
#include <stdio.h>
void main()
{
printf("test2");
getch();
}
If you want to use getch(), include: #include <conio.h>
Question
As a teaching assistant, I'm involved in creating exercises for a master level security course. My goal is to teach practical aspects of security. As we are currently discussing software security, I think it is interesting to go somewhat into software verification. In my studies, I've personally had some interesting encounters with several static code verifiers (ESC/Java for Java, PREfast for C). However, I'm wondering if there are more actively developed tools available by now. I've found Microsoft's VCC [2], and a few others, like Mozilla's Pork, but neither seem particularly focused on security. Does anyone have interesting projects to share?
[2] http://vcc.codeplex.com/ (reference updated as per Ernie Cohen's answer)
Hi Rens,
VCC is focused on software correctness, one aspect of which is security. Most practical security problems are really problems of program correctness (buffer overruns being the best known example); if you don't prove things like memory and concurrency safety, anything else you "prove" about the system is moot. Moreover, one typical way to prove security of a program is to prove that it correctly implements a deterministic specification.
There has also been some security-specific verification work with VCC. In particular, there has been work on verifying software that makes essential use of cryptography, as well as some work on proving information flow for C code. You can contact me out-of-band if you want more details.
cheers,
ernie
PS the proper reference for VCC is vcc.codeplex.com.
Question
I'm trying to implement a real-time system to sense and avoid. I have been given a robot to program, but it only has a single camera, without the capability for modifications. The system runs over a WiFi network and must operate in real time. Any suggestions?
Question
As the title
Let's modify your question slightly: "Is it difficult to make MIPs for target molecules that are selective, robust, and cost effective?" The answer to that question is a very big YES! If it were easy, there would be a flood of products on the market. For example, I know a company whose charter is to develop MIPs to use as detectors of rather low concentrations of specific molecules that are produced in the body that indicate specific disease conditions while they are in an early stage and treatable. Also, some types of MIPs are being used to detect explosive compounds in soil and water, such as in areas of mine fields and abandoned industrial sites that made munitions for military use. A big problem with explosives is getting discrimination between the various types. You'll find many papers on this if you search for TNT, DNT, RDX, etc.
The "quick-and-dirty" method is to mix your target molecule into a reaction mixture and form a polymer, then wash out the target molecule and you have your MIPs. This will leave some of the target molecules locked in the polymer no matter how much washing you do. When designing your system, you need to consider how you will be detecting that the target molecule is in place in its cavity. Then you'll need to run several possible interfering molecules to see if they also give a response, and also if they fill the cavities and block your target molecule. Also, how long does it take from the time the MIP is exposed to the test material until a response is generated and stabilized.
Question
Monte Carlo Integration is a good way of resolving high dimension integration problems and some open source libraries Cuba, HIntLib, dvegas have the MC integration algorithms. However, it seems that they cannot be compiled well in visual studio environments. Does anyone have suggestions on this topic?
I have no personal experience with QuantLib (www.quantlib.org), but I assume that since it supports various Microsoft languages, that it would have everything necessary for Visual Studio. The other alternative is *not* to use Visual Studio, but instead use command line invocation of the compiler - using Makefiles and GNUMake.
Question
How can CUDA be integrated with Code::Blocks?
Did you try setting nVidia's library and include paths in the Code::Blocks project properties? By the way, remember that CUDA source code is compiled by NVCC, not GCC.
Question
It's a general problem in any programming language and any Integrated Development Environment.
It's the design of any compiler parser. The parser can only make so many assumptions, and one of them is that a number (e.g. 3) is a numeric. It's not the declaration that's a problem (e.g. integer 3things) so much, but the use of the numerics (e.g. total = subtotal + 3things;). The compiler doesn't know if you made a mistake with the "3", even though you declared it earlier.
Question
It arises as the separation problem for solving a network design problem which I am trying to solve by branch and cut.
Question
I want to use the C programming language to call and control the running of the Arena simulation software, for solving a call center issue.
Can you be more specific? Are you trying to link input/output files to your Arena simulation?
Question
Usually we generate paths from the source code using a flow graph and use these paths for unit/integration testing. In white-box testing, there are coverage criteria like branch coverage, condition coverage, MC/DC, etc. that the paths should cover. Most testing techniques use this kind of coverage criterion. Suppose we find the following paths in a dummy source code:
1. 1-2-3-4-5-10
2. 1-2-3-4-6-10
3. 1-2-3-10
4. 1-2-7-10
5. 1-2-7-8-10
6. 1-2-7-8-9-10
My question is: how can these paths be generated automatically? Is there any tool support for that?
You can save that information to a CSV file and then write a C/C++ program that reads it with the standard file I/O functions. It is quite easy; example C/C++ source code for handling CSV files can be found on the web.
Question
I want to create one text file which contains all node information (neighbour nodes, distance between nodes, node mobility, speed, delay), and then write some C code which converts this text file into a Tcl script.
PHP perl python ?
Question
Often when I use strings in my program it shows no error during compilation, but at runtime it gives a segmentation fault (core dumped). If there is no compile-time error, why doesn't it work? (I work in a Linux environment, Ubuntu.)
A common cause of segmentation faults is a program trying to store data in unallocated memory or calling functions through an incorrect declaration. Some helpful steps: 1) dynamically allocate (and later deallocate) memory before using it, 2) initialize arrays, 3) compile with more debug options (e.g. -g -Wall) and run the program under a debugger.
Question
How can I print a variable column-wise for different constant values of an independent variable? I am printing temperature as a function of different velocities; see the figure, which shows the format. I want to print all these data into a single file generated by a C program.
Dear Paresh - the question is not really comprehensible as a matter of English. Please rephrase. When you say "how to print variable column-wise", for example, do you really mean "how do I print several columns of DATA"?
Well, that's just "how do I print anything in C". for (i=0;i<nrows;i++) printrow(i).
And it looks like your function printrow(int i) should have body printf("%f\t%f\t%f\t%f\n", data[i][0], data[i][1],...); where data is some 2-D array of floating point values.
What is the question REALLY about? Is there a problem you have encountered doing things this way? Are some of the columns misaligned because of short or long data entries? You can specify exactly how much space each float takes on the page using the fine control available via formatting in printf. Are you perhaps saying that you are a novice in C and need a pointer to how to print stuff out? If so, please ask for a reference to a good text for you (no.. please let's not have the perennial debate about which is best! Argh).
Question
While opening file descriptors, the read or write permission that we set in the open() system call does not seem to make any difference. Even if I give read-only permission, I am still able to write to that file descriptor. Can someone explain this phenomenon?
This is my open call: fd = open(my_dev, O_RDONLY, 0);
Question
I want to plot data for a two-factorial experiment in a simple boxplot. The attached diagram shows an example. The grouping structure of the second factor should be indicated by horizontal lines under the axis labels (red lines in the example).
I can solve the problem as shown in the code snippet below. But there is a problem:
When the plot is (vertically) resized, the red lines are not anymore drawn at the desired vertical position. I am looking for a solution for this problem. The text can be placed at a particular "line" in the margin. Isn't there such a possibility to draw lines at a particular "line" instead of giving the user coordinates?
Code snippet used to generate the plot including the red lines:
# generate some values of a two-factorial experiment
values = rnorm(6*10)
lvl1 = c("one","two","three")
lvl2 = c("control","treated")
factor1 = rep(gl(3,10,labels=lvl1),2)
factor2 = gl(2,30,labels=lvl2)
plotgrp = factor(paste(factor1, factor2), levels=c(sapply(lvl1,paste,lvl2)))
# boxplot
at=c(1:3,5:7)
boxplot(values~plotgrp,at=at,xaxt="n",xlab="",las=2,ylab="",main="Example")
axis(1,at=at,labels=rep(lvl1,2),tick=FALSE)
mtext(lvl2,1,line=3,at=c(2,6))
cxy = par("cxy")
d = 3*cxy[1]
ypos = par("usr")[3] - 2.75*cxy[2]
segments(c(1,5)-d,c(ypos,ypos),c(3,7)+d,c(ypos,ypos),xpd=NA,lwd=2,col="red")
I found drawing into the margins troublesome, too. For a solution that works under arbitrary rescaling from the R Mac GUI, I only found solutions to work that allow the specification of positions in line coordinates but not in user coordinates. For your particular case, I can merely offer an ad hoc hack based on an abuse of additional axes. Try plotting your lines with these two commands:
axis(1,at=c(0.5,1,2,3,3.5),col="red",line=2.5,tick=T,labels=rep("",5),lwd=2,lwd.ticks=0)
axis(1,at=4+c(0.5,1,2,3,3.5),col="red",line=2.5,tick=T,labels=rep("",5),lwd=2,lwd.ticks=0)
Question
Assume you want to develop an application and you want one section of the code to be in Java and another in C. How do you do that?
Java Native Interface (JNI) or Java Native Access (JNA) can be used for this purpose.
Question
I have implemented a packet-broadcasting application in C, but when I run the program it prints the error: bind: Address already in use.
I understand you use two applications, a client and a server. The server must listen on a port, e.g. 5000, and the client should let the OS pick its local port (use 0) but send to port 5000.
Question
Suppose I have a server program which is written in Java, and the client program in C. How can I establish a communication link between them using a TCP connection?
The language in which these programs are written is not relevant, so long as they offer a means to utilize a socket library (and both Java and C do, of course). The standard approach would be for the server to run a thread that opens a TCP socket in listening mode on a specific port number, waits for incoming connections, and then spawns a thread when a remote process connects, even as the listening thread continues to wait for additional incoming connections. (If you only expect one client to connect at any given time, this thread business is not needed.) The client program opens a TCP socket, connects to the server's IP address and port number, and starts communicating.
How the two communicate is up to you. Presumably, you'll want to design a command protocol or borrow/extend an existing protocol. If possible, I prefer to use a human-readable protocol, as it makes it easier to test the server by simply connecting to it using telnet and issuing commands manually. One thing you might want to pay special attention to is security (both to authenticate clients to prevent unauthorized access, and to vet anything received from a malfunctioning or hacked client). Another thing to pay attention to is to make sure the server detects "hung" clients (e.g., TCP connection issues) and frees up resources appropriately.
Other than that, there are plenty of examples on the Web, e.g., Google "java tcp server code". And as I said above, you can mix-and-match a Java server with a C client, so long as they both speak the same command protocol (i.e., the commands sent by the client are consistent with what the server expects.)
Question
I am trying to find a code, or a tutorial with information on how to implement a fuzzy constraint solver in c/c++. I have found various scientific articles with methodologies to solve fuzzy constraint problems, some of which claim to be even faster than some older solvers (like con'flex). Con'flex could be a solution, but is no longer supported and all its documentation is in French.
Maybe you can develop a fuzzy constraint solver in C/C++ with reference to the open-source FuzzyCLIPS.
FuzzyCLIPS is a fuzzy logic extension of the CLIPS (C Language Integrated Production System) expert system shell from NASA.
I used FuzzyCLIPS for a satellite-related project, but the corresponding technical report was not published. The technical report of another satellite-related project can be downloaded from the following RG link. This paper (published at IEEE RWS) solved a constrained optimization problem, an approach parallel to a fuzzy constraint solver.
Y. Hong, A. Srinivasan, B. Cheng, L. Hartman, and P. Andreadis, "Optimal Power Allocation for Multiple Beam Satellite Systems," Proceedings of IEEE Radio and Wireless Symposium (IEEE RWS), Orlando, FL, January 2008, pp. 823-826. Available in the following RG link.
Question
I own a PIC18F4550 and I need to make it a slave in SPI mode, but I do not understand the SSPBUF register. According to the datasheet, it is the Serial Receive/Transmit Buffer Register (SSPBUF). But how do I know whether it holds received information or is sending information? I program in C in MPLAB X.
Hi Odivio,
I've never worked with this kind of register, but after a brief overview of the datasheet, I think the hardware is always shifting data into (SDI) and out of (SDO) the Shift Register (SSPSR), and you can transfer data from SSPSR to SSPBUF (for a read operation) whenever you want (count clock ticks so as not to re-read any bit of data), or transfer from SSPBUF to SSPSR (for a write operation).
- Since it is always shifting data in and out, when the Master sends "relevant" data, it is also shifting in (both through SSPSR), but "your software" has to ignore the input (SDI).
- The slave has to wait until all those 8 bits are shifted in (SDI) before transferring them to SSPBUF; it will also be shifting data out through SDO, but the Master won't care at this moment.
Configure SDI, SDO, SCK, clear picture on "Figure 19-2: SPI MASTER/SLAVE CONNECTION" and interesting data around page 200.
Also check that interrupts and status bits are appropriately set, so that these flags let you know when a full new byte has been shifted in/out, etc.
Question
I am using Matlab for my image processing research. In order to speed up my programs i am searching for a scientific numerical library C/C++. So what is the best choice?
In my opinion, the best library depends upon several factors:
1. The specific problem you have to face (e.g. linear algebra, non-linear systems, ordinary differential equations, partial differential equations, etc.)
2. How much the speed is important for your calculation.
3. How much is important to have a user-friendly interface.
4. Serial vs Shared memory vs Distributed memory
Unfortunately points 2 and 3 usually cannot be achieved at the same time: in general very user-friendly libraries tend to be a little bit slower.
In the following you can find my suggestions.
1. Vector/Matrix and linear algebra
I think that the Eigen (http://eigen.tuxfamily.org/index.php?title=Main_Page) and Blaze (https://code.google.com/p/blaze-lib/) libraries are very nice: they exploit the OO nature of C++ and therefore they are very easy to use. At the same time they are very fast. In particular, Eigen has also the possibility to solve linear systems using several types of decompositions and includes several useful classes and functions for image manipulation.
The Intel MKL libraries (which are the BLAS/LAPACK implementation provided by Intel) remain the best choice if you are mainly interested in speed and you are using Intel processors. Unfortunately they are not C++, they are quite complex to use, and the interface is not so easy. Please consider that they are written in FORTRAN, but a C interface is provided.
You can have a more complete list of available libraries at the following address:
2. Ordinary differential equations
There are several solvers for ordinary differential equations, but most of them are in FORTRAN (VODE, LSODE, RADAU, DASPK, etc.) or in C (CVODE). The only real C++ solver, at least to my knowledge, is ODEINT (http://headmyshoulder.github.io/odeint-v2/). I tried this library only for non-stiff problems and works very well, it is easy to use and very fast. A very good alternative (not completely C++) is the BzzMath library (http://homes.chem.polimi.it/gbuzzi/), very powerful for stiff-problems.
3. Large problems (requiring shared or distributed memory)
The best option, which is full C++, is Trilinos (http://trilinos.sandia.gov/). It provides a lot of classes and functions to manage vectors and matrices in parallel, to solve linear and non-linear systems, to solve ordinary differential equations and calculate eigenvalues, etc. The library is enormous and quite complex to use and can be used also in serial. A simpler alternative (which is simple C) is PETSc (http://www.mcs.anl.gov/petsc/).
Question
How to implement an orthogonal sequence (for example Zadoff-Chu) using C/C++?
If the issue is working with complex types, C++98 has std::complex and C99 has complex.h.
Question
I want to extract some information from C code, like lists of variables/functions or global variables, and definition-use of variables. Currently I convert the C code to XML and parse the XML file to collect this information, but it is a really difficult process. Is there any parser/tool for that?
C parsers are really ten a penny. One comes with antlr, one with Haskell, ... The only problem is which C: there are different kinds of C, in that each compiler has its own extensions, often undocumented. GNU C, for example, permits interior functions defined within functions, and statements that return a value and which can be used as expressions by enclosing them in special brackets. Then there's all that __attribute__ stuff on top of all the other extensions.
Most parsers are based one way or another on a free yacc grammar first published by Jutta Degener in 1995. Google for it and you'll find it at once.
Parsing has been the routine first step in all language building since year dot! Nevertheless, it's difficult to get right for C, somewhat because "right" is a meaningless concept given the multiplicity of extensions and subsets supported by different compilers. You need a specific target.
The technique most people use nowadays is to get the compiler for the C that you are interested in to convert it to some standardized form itself. It sounds to me like that is what you are doing. Stealing gcc's parser is a standard way to go (but gcc claims it does not yet support absolutely all of C99, if anyone cares). If you run gcc with -da you get out (in separate files) representations of all the abstract syntax trees that gcc itself uses in its internal pipeline, one after the other, starting with the initial abstract tree. The AST representation format is called RTL - it's well known (and rebarbative). Stealing gcc is the technique used in coverity, for example.
If one's sensible, one tries to get a "minimal C" source out first. There is a minimal standard C that is used in most static analysis (hmm, and I forget the accepted name for it... CIL?). Clang uses it, as I recall, for its static analyzer. Oh yes, it's all based on an abstract machine model, the LLVM. Clang will translate quite a few languages, including C, into "minimal C" that runs on the LLVM. That is ultra useful.
It appears to me that you are trying to construct a "data dictionary".
You could do worse than simply pick up one of the standard C static analyzers. Clang, Eclair, ... look them up on Wikipedia:
Clang is kind of all the nuts and bolts, but you may find it most usable.
Going for minimal, you may like Sparse (a static analyzer), written by Linus, which contains a quick-and-dirty C parser (parser.c) that it looks like he wrote directly in C off the top of his head, without using any formal grammar. That's fun.
Question
My compression algorithm majorly depends on opening a file in 0 and 1's i.e Binary and applying a compression algorithm to it . Can anyone help me ?
Huffman coding is one of the best methods for compressing a file. What you basically do is check the occurrence frequencies of letters, groups of letters, or words (whatever your level of requirement is) and assign a code to each. For example, the most frequent element will be assigned the code representable with the least number of bits, and the least frequent the most. However, Huffman requires the mapping table to be stored as well. Gaussian encoding is another compression scheme that does not require global knowledge.
Also you may have other requirements, such as being able to decompress a substring without decompressing everything. So there's a lot to compression, actually.
The book "Managing Gigabytes" is an excellent resource on compression; it gives a comprehensive explanation of all these algorithms, if you have access to it.
Question
I want to implement "Manolis I.A. Lourakis and Antonis A. Argyros" Sparse Bundle Adjustment in C++. Anyone Kindly guide to step by step by implementation.
Question
.
This looks like an evil example straight from the 2nd edition of Kernighan and Ritchie (The C Programming Language), p. 122 (section 5.12: Complicated Declarations), where it is described as a function returning a pointer to an array of pointers to functions returning char. Here is a usage example:
#include <stdio.h>
char x1() { return 'a'; } // Function returning a char
char (*x2[])() = {&x1}; // Array of pointers to functions returning char
char (*(*x())[])() { return &x2; } // Function returning a pointer to the above
int main(void)
{
char (*x3)() = **x(); // Pointer to a function returning char
printf("This is the value: %c\n", x3());
return 0;
}
Alternatively, you could skip declaring x3 altogether and just write
printf("This is the value: %c\n", ((*(x()))[0])());
Question
Hi,
I know that FORTRAN is fast, whereas one of the disadvantages of MATLAB is speed: MATLAB code is interpreted during runtime, while languages like C and Fortran are faster because they are first compiled into the computer's native language.
How can I develop a fast MATLAB code?
Thanks for taking a look.
One basic approach to writing fast MATLAB (or R) code is to understand vectorized operations. Take a look at this PDF, which addresses this particular problem and I guess will be incredibly helpful:
Best,
José.
Question
Traditionally, programs can be compared using syntax trees. Is there any alternative method of comparing them ? If I use a method such as TD-IDF, it just compares the symbols or the keywords (e.g. keyword appears 2 times) but not their relations (e.g. nested for).
You could certainly analyze them using call graph tools such as CodeViz or Egypt and maybe utilize some of the functionality of reverse engineering tools such as IDA Pro. Or if you don't care about the implementation you can write unit tests or run the programs through fuzzers and compare the output.
As far as mathematically proving two programs are equivalent, that's more difficult and I believe there is still research being done towards it. I had some friends doing behavioral analysis as an alternative to signature-based virus detection, but those tools are expensive and complicated. Regardless, I think you would do best searching for methods to compare programs in the security field, as that's probably where the most industry-based analysis has been done.
Question
Why data flow testing is needed?
where we can implement this technique?
Mostly for performance, for reducing the chance of errors, for memory-use optimization, and for code readability (because the less you pass one piece of data around the program, the easier it is for someone else to later review your code and figure out what is happening and where data is coming from and going to).
As for where it can be implemented. Anywhere where you "care to care" for resource consumption. For example, embedded devices, mobile platforms (although this is becoming less and less true for newer mobile devices), in industrial machine programming, for automobile computer system software development (very important to reduce the chance for data becoming corrupt, lost, somehow modified, or outdated while "flowing" to its final destination - this is in case of asynchronous and multi-threading systems).
Actually, everywhere if you are serious about product quality.
Question
The object oriented programming language C++ may be considered a superset of C. But in numerical computations, what are the advantages (or computational features) of C++ over C?
This C versus C++ issue is an old one, and as we see here, there is a diversity of opinion. Let me add to that diversity!
Firstly, it can hardly be doubted that C++ leads to good code, and the Boost libraries (parts of which have since been adopted into the C++ standard) are a great enhancement. It's great for larger projects because of its strictness and syntax: a group of individuals can write modular code and bring it all together with little danger of doing bad things.
Having said that, you do not need to use all that C++ offers: operator overloading, templates and lambda expressions are an example of things that are very good when needed, but you do not have to use them. Likewise (and here comes a heresy) your C++ classes do not have to have private members - everything can be public and then we are back to good old C-structures. C++ is in many ways (but certainly not all) back-compatible with C.
Confession: I hate it when my students give me technically correct yet obscure code that will not be understood by others who are less diligent (geeky). That effectively renders the code non-portable and difficult to maintain once the author is gone.
Secondly comes the issue of compilers and their performance. I generally find that the Intel compilers (commercial) have excellent optimization (and good profilers) so I can squeeze out 10%-15% more performance when needed. For large codes like gas codes or real time image processing that can make a difference. However, there is the issue of the user interface, and for that it is hard to beat the (commercial) Visual Studio compilers, which restrict you to using Windoze. GCC is pretty good these days, the debugger has improved though the profiler is a bit behind. But for trivial stuff that's not necessary.
Thirdly is the issue, if you are a student, of what you are going to do with your knowledge of C or C++. The job market for C is far smaller than for C++ (but still bigger than Fortran I guess?). People have made statements like "C is good for small quick jobs". I agree, but why not use Python? Totally cross platform and lots of jobs requiring Python out there. I even run the management of my C++ code from Python.
Question
It may help in my research code
This article introduces some basic methods in Java for matrix addition, multiplication, inverse, transpose, and other relevant operations. The matrix operations are explained briefly.
The main functions are given as static utility methods. All methods in this article are unit tested, and the test code is part of the attached files.
code enclosed
Ex.i get 5gb pen drive. And ready the pen drive by future but can you use your computer and you never think not possible.
Question
I have an explanation, but I want your thoughts.
Question
Dear all,
I want to do a Gaussian TD opt of a Pt complex (input file follows); in the output file it says:
Leave Link  107 at Sat Sep 20 17:12:14 2014, MaxMem=   33554432 cpu:       1.0
(Enter D:\G09W\l101.exe)
Symbolic Z-matrix:
End of file in GetChg.
Error termination via Lnk1e in D:\G09W\l101.exe at Sat Sep 20 17:12:14 2014.
Job cpu time:  0 days  1 hours 53 minutes 16.0 seconds.
File lengths (MBytes):  RWF=    247 Int=      0 D2E=      0 Chk=     25 Scr=      1
.
Could someone kindly give some advice?
Sincerely,
%chk=Pt1a_small.chk
#p opt td b3lyp/genecp geom=connectivity fopt=maxcycles=1024
scf=(conver=7,maxcycles=2048)
Title Card Required
0 1
Pt                 0.35598703    3.41423943    0.00000000
N                 -0.02801297    2.15923943   -1.46300000
C                  0.74998703    4.64123943    1.49500000
N                 -1.64601297    3.65923943   -0.12600000
N                  2.23298703    2.75723943   -0.36900000
C                  1.00998703    1.53423943   -2.04600000
C                 -1.30201297    2.03323943   -1.87500000
C                 -2.23501297    2.85523943   -1.08300000
C                  2.29398703    1.81723943   -1.37400000
C                 -2.41001297    4.49923943    0.59500000
C                  3.36698703    3.11523943    0.26100000
C                  1.01998703    5.38923943    2.40500000
C                 -1.57901297    1.23723943   -2.97400000
C                  0.79098703    0.75723943   -3.17700000
C                 -3.60601297    2.86223943   -1.26000000
C                  3.47998703    1.17523943   -1.68700000
C                 -3.79101297    4.55523943    0.43100000
C                  4.58298703    2.50723943   -0.03100000
C                  1.34198703    6.28123943    3.49000000
C                 -0.52101297    0.62023943   -3.63300000
C                 -4.39601297    3.70623943   -0.48900000
C                  4.63498703    1.50923943   -0.98900000
H                 -1.94404745    5.14103801    1.31324140
H                  3.33214835    3.88507273    1.00332268
H                 -2.58587089    1.09982962   -3.30904579
H                  1.60332734    0.27858586   -3.68285964
H                 -4.05701208    2.22039827   -1.98769480
H                  3.50730259    0.43176154   -2.45602177
H                 -4.37843578    5.24152145    1.00445570
H                  5.47261955    2.80876993    0.48138007
H                  1.09279944    6.01265143    4.49531887
H                  1.83026716    7.21215835    3.29031908
H                 -0.71615804    0.03058101   -4.50427570
H                 -5.45999759    3.70281849   -0.60224765
H                  5.55445087    1.00028440   -1.19012437
1 4 1.0 2 1.0 5 1.0 3 1.0
2 6 1.5 7 1.5
3 12 3.0
4 8 1.5 10 1.5
5 9 1.5 11 1.5
6 9 1.0 14 1.5
7 8 1.0 13 2.0
8 15 2.0
9 16 2.0
10 17 1.5 23 1.0
11 18 1.5 24 1.0
12 19 1.5
13 20 1.5 25 1.0
14 20 1.5 26 1.0
15 21 1.5 27 1.0
16 22 1.5 28 1.0
17 21 1.5 29 1.0
18 22 2.0 30 1.0
19 31 1.0 32 1.0
20 33 1.0
21 34 1.0
22 35 1.0
23
24
25
26
27
28
29
30
31
32
33
34
35
-C -N -H -O 0
6-31G**
****
-Pt 0
LANL2DZ
****
-Pt 0
LANL2DZ