Tuesday, May 25, 2010

System call Internals

Try this link

System Call Internals

Linux Linkers and Loaders

http://www.linuxjournal.com/article/6463

Creating a package using autoconf and automake

Please do refer this directly

http://mij.oltrelinux.com/devel/autoconf-automake/

In case the link above does not work, use the following info
A tutorial for porting to autoconf & automake

l.u. 21/11/2005

A first disclaimer is that I don't really like autoconf and automake. This is not the place for longer dissertations, so I won't spend more words on this. However, it is a matter of fact that many users just like to fetch your application and issue the usual ./configure && make && make install right away.
So, this is a compact tutorial for moving a Makefile-based program to an autohell-enabled package ("autohell" is a popular way to refer to { autoconf, automake, libtool }, and one that I encourage).

A somewhat complete, realistic example is given here. If your PRE does not exactly match the one proposed, you probably just won't need to perform the corresponding steps below.

Definition of the problem:

PRE: you have a tree with

* sources in src/
* documentation in doc/
* man pages in man/
* some scripts in scripts/ (in general, stuff to be installed but not compiled)
* examples in examples/

POST: you want to

* check for the availability of the needed headers/libraries
* possibly adjust some things (say, some path in scripts or in documentation) at compile-time
* install everything in its adequate place

So, this is what to do for moving with the very minimum effort:

1. Cleaning up
Move aside every Makefile you have in the package (rename them for now)
2. Generating configure.ac
Run autoscan:

$ autoscan

autoscan tries to produce a suitable configure.ac file (autoconf's driver) by performing simple analyses on the files in the package. This is enough for the moment (many people are happy to keep it permanently). Autoscan actually produces a file named configure.scan, so rename it to the name autoconf will look for:

$ mv configure.scan configure.ac

-- note: configure.in was the name formerly used for autoconf input files; it is now deprecated.
3. Adjusting things
Adjust the few things left to you by autoscan: open configure.ac with your favourite editor

$ vim configure.ac

look in the very first lines for the following:

AC_INIT(FULL-PACKAGE-NAME, VERSION, BUG-REPORT-ADDRESS)

and replace with your stuff, e.g.:

AC_INIT(pippo, 2.6, paperino@staff.pippo.org)

4. Generating a first configure script
At this point, you're ready to make autoconf produce the configure script:

$ autoconf

This produces two items: autom4te.cache and configure. The first is a directory used to speed up the job of the autohell tools, and may be removed when releasing the package. The latter is the shell script run by the final users.
At this stage, all the configure script does is check for the requirements suggested by autoscan, so nothing very conclusive yet.
5. Generating suitable Makefiles
We have the system-checking part; we now want the building and installing part. This comes from the cooperation of automake and autoconf: automake generates some "templates" that autoconf-generated scripts will translate into actual Makefiles. A first, "main" automake file is needed in the root of the package:

$ vim Makefile.am

list the subdirectories where work is needed:

AUTOMAKE_OPTIONS = foreign
SUBDIRS = src doc examples man scripts

the first line sets the flavour automake will operate in. "foreign" means not GNU, and is common for avoiding boring messages about files organized differently from what GNU expects.
The second line shows a list of subdirectories to descend for further work. The first one has stuff to compile, while the rest just needs installing, but we don't care in this file. We now prepare the Makefile.am file for each of these directories. Automake will step into each of them and produce the corresponding Makefile.in file. Those .in files will be used by autoconf scripts to produce the final Makefiles.
Edit src/Makefile.am:

$ vim src/Makefile.am

and insert:

# what flags you want to pass to the C compiler & linker
# (AM_CFLAGS is used here because plain CFLAGS is reserved for the user)
AM_CFLAGS = -pedantic -Wall -std=c99 -O2
AM_LDFLAGS =

# this lists the binaries to produce, the (non-PHONY, binary) targets in
# the previous manual Makefile
bin_PROGRAMS = targetbinary1 targetbinary2 [...] targetbinaryN
targetbinary1_SOURCES = targetbinary1.c myheader.h [...]
targetbinary2_SOURCES = targetbinary2.c
.
.
targetbinaryN_SOURCES = targetbinaryN.c

This was the most difficult one. In general, the uppercase suffix part like "_PROGRAMS" is called the primary and partially tells what to do with the argument; the lowercase prefix (it's not given a name) tells the directory where to install.
E.g.:

bin_PROGRAMS

installs binaries in $(prefix)/bin , and

sbin_PROGRAMS

installs in $(prefix)/sbin . More primaries will appear in the following; the automake manual has a complete list of them. Not every prefix can be combined with every primary (see later for how to work around this problem).

Let us now move to the man pages:

$ vim man/Makefile.am

insert the following in it:

man_MANS = firstman.1 secondman.8 thirdman.3 [...]

Yes, automake will deduce by itself what is needed to install these. Now edit the file for scripts:

$ vim scripts/Makefile.am

insert:

bin_SCRIPTS = script1.sh script2.sh [...]

The primary "SCRIPTS" instructs the generated Makefiles to just install the arguments, without compiling, of course.

So far so good. Two jobs remain: installing examples and installing plain docs. This is the nasty part, as automake has no primary for installing into the usual $(prefix)/share/doc/pippo . The workaround is to specify a further variable and use it as the prefix:

$ vim doc/Makefile.am

docdir = $(datadir)/doc/@PACKAGE@
doc_DATA = README DONTS

If "abc" is wanted as the prefix, "abcdir" is to be specified. E.g. the code above expands to /usr/local/share/doc/pippo ("@PACKAGE@" will be expanded by autoconf when producing the final Makefile, see below). $(datadir) is known to all the configure scripts it generates. You may look up the full list of directory variables in the autoconf manual.

Similarly for examples, but we want to install into $(prefix)/share/examples/pippo , so:

$ vim examples/Makefile.am

exampledir = $(datadir)/examples/@PACKAGE@
example_DATA = sample1.dat sample2.dat [...]

All these Makefile.am files now exist, but autoconf must now be told about them.
6. Integrating the checking (autoconf) part and the building (automake) part
We now insert some macros in configure.ac to tell autoconf that the final Makefiles have to be produced after ./configure :

$ vim configure.ac

right after AC_INIT(), initialize automake:

AM_INIT_AUTOMAKE(pippo, 2.6)

then, let autoconf generate a configure script that will output Makefiles for all of the above directories:

AC_OUTPUT(Makefile src/Makefile doc/Makefile examples/Makefile man/Makefile scripts/Makefile)
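Putting the pieces together, the configure.ac for the running "pippo" example could now look roughly like this (the AC_PROG_CC and AC_CHECK_HEADERS lines are only a sketch of the checks autoscan typically emits; your own will differ):

```
AC_INIT(pippo, 2.6, paperino@staff.pippo.org)
AM_INIT_AUTOMAKE(pippo, 2.6)

# checks suggested by autoscan (illustrative only)
AC_PROG_CC
AC_CHECK_HEADERS(stdlib.h unistd.h)

AC_OUTPUT(Makefile src/Makefile doc/Makefile examples/Makefile man/Makefile scripts/Makefile)
```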

7. Making tools output the configure script and Makefile templates
We now have complete instructions for generating the famous configure script run by users when installing, which both checks for building/running requirements and generates the Makefiles for actually building and installing everything in place. Let us now make the tools generate that script:

$ aclocal

This generates a file aclocal.m4 that contains macros for automake things, e.g. AM_INIT_AUTOMAKE.

$ automake --add-missing

Automake now reads configure.ac and the top-level Makefile.am, interprets them (e.g. sees that further work has to be done in some subdirectories) and, for each Makefile.am, produces a Makefile.in. The --add-missing argument tells automake to provide default scripts for reporting errors, installing, etc., so it can be omitted in subsequent runs.
Finally, let autoconf build the configure script:

$ autoconf

This produces the final, full-featured configure shell script.
8. Further customizations
If you need to perform custom checks, or custom actions in configure, just write the (shell) code somewhere in configure.ac (before the AC_OUTPUT command), then run autoconf again. For many checks, autoconf may already provide a macro: look through the list of autoconf macros before writing useless code.

How do things work from now on
The user first runs:

$ ./configure

The shell script just generated will:

1. scan for dependencies on the basis of the AC_* macros given in configure.ac. If something is wrong or missing in the system, an appropriate error message is printed.
2. for each Makefile requested in AC_OUTPUT(), translate the corresponding Makefile.in template into the final Makefile. The main Makefile will provide the most common targets like install, clean, distclean, uninstall et al.

If configure succeeds, all the Makefile files are available. The user then issues:

$ make

The target all from the main Makefile is built. This target expands into all the hidden targets needed to build what you requested. Then, by means of

# make install

everything is installed.

Monday, May 24, 2010

What is PORTABLE and UN-PORTABLE CODE

Unportable Code:
implementation-defined— The compiler-writer chooses what happens, and has to document it.
Example: whether the sign bit is propagated, when shifting an int right.
unspecified— The behavior for something correct, on which the standard does not impose any requirements.
Example: the order of argument evaluation.
Bad Code:
undefined— The behavior for something incorrect, on which the standard does not impose any requirements. Anything is allowed to happen, from nothing, to a warning message to program termination, to CPU meltdown, to launching nuclear missiles (assuming you have the correct hardware option installed).
Example: what happens when a signed integer overflows.
a constraint— This is a restriction or requirement that must be obeyed. If you don't, your program behavior becomes undefined in the sense above. Now here's an amazing thing: it's easy to tell if something is a constraint or not, because each topic in the standard has a subparagraph labelled "Constraints" that lists them all. Now here's an even more amazing thing: the standard specifies [5] that compilers only have to produce error messages for violations of syntax and constraints! This means that any semantic rule that's not in a constraints subsection can be broken, and since the behavior is undefined, the compiler is free to do anything and doesn't even have to warn you about it!

Example: the operands of the % operator must have integral type. So using a non-integral type with % must cause a diagnostic.
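As a hedged illustration of the constraint just described (the compiler name cc and the file path are assumptions, not from the original text):

```shell
# % with a non-integral operand violates a constraint of the standard,
# so the compiler is required to issue a diagnostic
echo 'int main(void){ return 1.5 % 2; }' > /tmp/bad.c
cc /tmp/bad.c -o /tmp/bad   # fails, complaining about the operands of %
```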
Example of a rule that is not a constraint: all identifiers declared in the C standard header files are reserved for the implementation, so you may not declare a function called malloc() because a standard header file already has a function of that name. But since this is not a constraint, the rule can be broken, and the compiler doesn't have to warn you.

Portable Code:

strictly-conforming— A strictly-conforming program is one that:
• only uses specified features.
• doesn't exceed any implementation-defined limit.
• has no output that depends on implementation-defined, unspecified, or undefined features.

This was intended to describe maximally portable programs, which will always produce the identical output whatever they are run on. In fact, it is not a very interesting class because it is so small compared to the universe of conforming programs. For example, the following program is not strictly conforming:

#include <stdio.h>
#include <limits.h>

int main(void)
{
    (void) printf("biggest int is %d", INT_MAX);
    return 0;
}
/* not strictly conforming: implementation-defined output! */


conforming— A conforming program can depend on the nonportable features of an implementation. So a program is conforming with respect to a specific implementation, and the same program may be nonconforming using a different compiler. It can have extensions, but not extensions that alter the
behavior of a strictly-conforming program. This rule is not a constraint, however, so don't expect the compiler to warn you about violations that render your program nonconforming!
The program example above is conforming.

Thursday, May 13, 2010

Grep Tips

Searching Files on UNIX
On MPE you can display files using the :Print command, Fcopy, Magnet, or Qedit (with pattern-match searches). On HP-UX you can display files using cat, or even better with more (which allows string searches using the slash "/" command), and Qedit (including searches of $Include files, and so on). But if you really want to search for patterns of text like a UNIX guru, grep is the tool for you.

cat report.c {prints file on stdout, no pauses}
cat -v -e -t dump {show non-printing characters too}
cat >newfile {reads from stdin, writes to 'newfile'}
cat rpt1.c inp.c test.s >newfile {combine 3 files into 1}
more report.c {space for next page, q to quit}
ps -a | more {page through the full output of ps}
grep smug *.txt {search *.txt files for 'smug'}


MPE users will take a while to remember that more, like most UNIX tools, responds to a Return by printing the next line, not the next screen. Use the Spacebar to print the next page. Type "q" to quit. To scan ahead to find a string pattern, type "/" and enter a regular expression to match. For further help, type "h".

Searching Files Using UNIX grep
The grep program is a standard UNIX utility that searches through a set of files for an arbitrary text pattern, specified through a regular expression. Also check the man pages as well for egrep and fgrep. The MPE equivalents are MPEX and Magnet, both third-party products. By default, grep is case-sensitive (use -i to ignore case). By default, grep ignores the context of a string (use -w to match words only). By default, grep shows the lines that match (use -v to show those that don't match).

% grep BOB tmpfile {search 'tmpfile' for 'BOB' anywhere in a line}
% grep -i -w blkptr * {search files in CWD for word blkptr, any case}
% grep run[- ]time *.txt {find 'run time' or 'run-time' in all txt files}
% who | grep root {pipe who to grep, look for root}



Understanding Regular Expressions
Regular Expressions are a feature of UNIX. They describe a pattern to match, a sequence of characters, not words, within a line of text. Here is a quick summary of the special characters used in the grep tool and their meaning:

^ (Caret) = match expression at the start of a line, as in ^A.
$ (Dollar) = match expression at the end of a line, as in A$.
\ (Back Slash) = turn off the special meaning of the next character, as in \^.
[ ] (Brackets) = match any one of the enclosed characters, as in [aeiou]. Use Hyphen "-" for a range, as in [0-9].
[^ ] = match any one character except those enclosed in [ ], as in [^0-9].
. (Period) = match a single character of any value, except end of line.
* (Asterisk) = match zero or more of the preceding character or expression.
\{x,y\} = match x to y occurrences of the preceding.
\{x\} = match exactly x occurrences of the preceding.
\{x,\} = match x or more occurrences of the preceding.
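The interval operators above can be tried out directly; here is a small made-up example (the file name and contents are invented for illustration):

```shell
# create a throwaway sample file
printf 'call 555-1234\nno number here\n' > /tmp/grepdemo.txt

# \{3\} and \{4\} are the basic-regex interval operators described above
grep '[0-9]\{3\}-[0-9]\{4\}' /tmp/grepdemo.txt
# prints: call 555-1234
```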


As an MPE user, you may find regular expressions difficult to use at first. Please persevere, because they are used in many UNIX tools, from more to perl. Unfortunately, some tools use simple regular expressions and others use extended regular expressions and some extended features have been merged into simple tools, so that it looks as if every tool has its own syntax. Not only that, regular expressions use the same characters as shell wildcarding, but they are not used in exactly the same way. What do you expect of an operating system built by graduate students?

Since you usually type regular expressions within shell commands, it is good practice to enclose the regular expression in single quotes (') to stop the shell from expanding it before passing the argument to your search tool. Here are some examples using grep:


grep smug files {search files for lines with 'smug'}
grep '^smug' files {'smug' at the start of a line}
grep 'smug$' files {'smug' at the end of a line}
grep '^smug$' files {lines containing only 'smug'}
grep '\^s' files {lines starting with '^s', "\" escapes the ^}
grep '[Ss]mug' files {search for 'Smug' or 'smug'}
grep 'B[oO][bB]' files {search for BOB, Bob, BOb or BoB }
grep '^$' files {search for blank lines}
grep '[0-9][0-9]' file {search for pairs of numeric digits}


Back Slash "\" is used to escape the next symbol, that is, to turn off the special meaning that it has. To look for a Caret "^" at the start of a line, the expression is ^\^. Period "." matches any single character. So b.b will match "bob", "bib", "b-b", etc. Asterisk "*" does not mean the same thing in regular expressions as in wildcarding; it is a modifier that applies to the preceding single character, or expression such as [0-9]. An asterisk matches zero or more of what precedes it. Thus [A-Z]* matches any number of upper-case letters, including none, while [A-Z][A-Z]* matches one or more upper-case letters.

The vi editor uses \< \> to match characters at the beginning and/or end of a word boundary. A word boundary is either the edge of the line or any character except a letter, digit or underscore "_". To look for if, but skip stiff, the expression is \<if\>. For the same logic in grep, invoke it with the -w option. And remember that regular expressions are case-sensitive. If you don't care about the case, the expression to match "if" would be [Ii][Ff], where the characters in square brackets define a character set from which the pattern must match one character. Alternatively, you could also invoke grep with the -i option to ignore case.
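The difference between plain, -w, and -i -w matching can be seen with a small invented sample file:

```shell
# throwaway sample file (name and contents are made up)
printf 'a stiff drink\nif only\nIF ONLY\n' > /tmp/wdemo.txt

grep 'if' /tmp/wdemo.txt        # matches "a stiff drink" too: if is inside stiff
grep -w 'if' /tmp/wdemo.txt     # word match only: prints just "if only"
grep -i -w 'if' /tmp/wdemo.txt  # ignore case too: "if only" and "IF ONLY"
```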

Here are a few more examples of grep to show you what can be done:


grep '^From: ' /usr/mail/$USER {list your mail}
grep '[a-zA-Z]' {any line with at least one letter}
grep '[^a-zA-Z0-9]' {anything not a letter or number}
grep '[0-9]\{3\}-[0-9]\{4\}' {999-9999, like phone numbers}
grep '^.$' {lines with exactly one character}
grep '"smug"' {'smug' within double quotes}
grep '"*smug"*' {'smug', with or without quotes}
grep '^\.' {any line that starts with a Period "."}
grep '^\.[a-z][a-z]' {line start with "." and 2 lc letters}