
Monday, March 30, 2009

PPV maps details

Equations for the position-position-velocity (PPV) cubes were computed for a 3-D density cube with its center (point P) at a given distance (d) from the Sun (S). By the data convention here, the Y axis is along the line of sight (SP).

For any pixel in the cube (point Q), the line of sight SQ subtends an angle with respect to the central line of sight SP. Let Q' be the projection of Q onto the X-Y plane, so that SQ' has projections x and y along the two axes.

We can then relate R0 (= CS, the distance of the Sun from the Galactic center C), R (= the distance CQ'), the distance d (SP) and the distance d' (SQ') through other quantities and angles (such as the longitude, the angle CSP).
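As a sketch of the geometry in this notation (the relations below are the standard planar-triangle and Galactic-rotation expressions, stated here as the assumed convention rather than quoted from the post): the triangle C-S-Q', with the longitude l as the angle at S, gives by the law of cosines

    R^2 = R_0^2 + d'^2 - 2 R_0 d' cos(l),

and the line-of-sight velocity contributed by differential Galactic rotation, for circular rotation with angular speed omega(R) = V(R)/R, is

    v_los,rot = [omega(R) - omega_0] R_0 sin(l).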

The projection of the relative velocity between S and Q' (due to Galactic rotation) is added to the projections of the pixel velocities (vxx, vyy and vzz). Doing this for each pixel gives the cube "v_los". We then sort the pixel values into velocity bins and make velocity maps of 1 km/s width.
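A minimal Python sketch of the binning step, assuming v_los is a 3-D array in km/s, that the density cube (here called rho) has the same shape, and that the line of sight is the last array axis (these names and axis conventions are illustrative, not from the post):

import numpy as np

def ppv_maps(rho, v_los, dv=1.0):
    """Collapse a density cube into velocity-channel maps (a sketch).

    rho, v_los : 3-D arrays of the same shape; the last axis is taken
    to be the line of sight (an assumption for illustration).
    dv : channel width in km/s.
    """
    edges = np.arange(np.floor(v_los.min()), np.ceil(v_los.max()) + dv, dv)
    nchan = len(edges) - 1
    # channel index of every pixel
    chan = np.clip(np.digitize(v_los, edges) - 1, 0, nchan - 1)
    maps = np.zeros((nchan,) + rho.shape[:2])
    for k in range(nchan):
        # keep only the pixels whose v_los falls in channel k, then collapse
        maps[k] = np.where(chan == k, rho, 0.0).sum(axis=-1)
    return maps, edges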




Sunday, March 29, 2009

Simulations: PPV maps ready

  1. Testing with 10x10x20 cubes
  2. PPV maps seem to be all right.
  3. Will now test on the desktop with full limits put in.

Wednesday, March 25, 2009

MHD simulations

Miguel has asked for the latest MHD simulation cubes. So I have read the data with Python instead of C++. It is a lot faster to write and test the code, and it is also easier to process and update.

  1. Read the density array: split each line into parts.
  2. Store the data in density_data[] and reshape it.
  3. Plot selected slices along z; they look okay (a sketch of steps 1-3 follows after this list).
  4. Now read density and velocities.
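A minimal sketch of steps 1-3 in Python, assuming a plain-text file with whitespace-separated values; the file name "density.dat" and the grid size are placeholders, not from the post:

import numpy as np
import matplotlib.pyplot as plt

nx = ny = nz = 128                 # assumed grid size, not from the post

# Steps 1-2: read the density values line by line, then reshape into a cube
values = []
with open("density.dat") as f:     # hypothetical file name
    for line in f:
        values.extend(float(p) for p in line.split())
density_data = np.array(values).reshape((nx, ny, nz))

# Step 3: plot a few slices along z to check the data
for k in (0, nz // 2, nz - 1):
    plt.figure()
    plt.imshow(density_data[:, :, k], origin="lower")
    plt.title(f"density slice z = {k}")
    plt.colorbar()
plt.show()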

Monday, June 09, 2008

Interference Mitigation



Vaishnavi worked on median-based interference identification and removal. The basic idea is to treat the time-frequency map of a given baseline as a 2-D array. See the diagram above for an example, where the red arrow indicates one interference location.

We then take a small section of this 2-D array (say a 32x32 block), whose values are the intensities (or amplitudes). We compute the median and standard deviation of the block and adopt a criterion of (median + 7*sigma) for genuine data.

Any amplitude > (median + 7*sigma) is treated as interference. This appears to identify interference quite reasonably. Check the following image, where black pixels indicate interference, and compare it with the top image: most of the interference is identified.

There is some data loss due to over-correction. Even so, the total data flagged is about 15%, which is quite good.
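A rough Python sketch of the thresholding described above, assuming the time-frequency map is a 2-D NumPy array and using non-overlapping 32x32 blocks (the block handling details are an assumption):

import numpy as np

def flag_interference(tf_map, block=32, nsigma=7.0):
    """Return a boolean mask, True where the amplitude exceeds the
    local (median + nsigma*std) computed over block x block sections."""
    ny, nx = tf_map.shape
    mask = np.zeros_like(tf_map, dtype=bool)
    for y0 in range(0, ny, block):
        for x0 in range(0, nx, block):
            sec = tf_map[y0:y0 + block, x0:x0 + block]
            med, sig = np.median(sec), np.std(sec)
            mask[y0:y0 + block, x0:x0 + block] = sec > (med + nsigma * sig)
    return mask

# usage sketch: check the flagged fraction (e.g. whether it stays near ~15%)
# mask = flag_interference(tf_map)
# print(mask.mean())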

Thursday, October 04, 2007

Fractals and image characterisation

Some links are in order

  1. Fractal Dimension: Wikipedia
  2. A course on Fractals at Yale U
  3. A course on Fractal dimension from images: Munich U
  4. Fractal Dimension explained

So, once you know about fractal dimensions, come back and read the material on the right (Conci, 2001).


One can treat the image as a 3-D object: compute the total number N of occupied boxes in the X-Y-I space as a function of the box size r. The fractal dimension is then D = ln(N) / ln(1/r).

In practice it is a little more complicated; check the paper by Conci (a PPT talk can also be found).
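As a hedged illustration of plain box counting in the X-Y-I space (a simplified sketch, not Conci's exact differential box-counting algorithm; the box sizes and intensity scaling are arbitrary choices):

import numpy as np

def box_counting_dimension(img, sizes=(2, 4, 8, 16, 32)):
    """Estimate a fractal dimension by counting occupied boxes in (x, y, I)."""
    img = np.asarray(img, dtype=float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalise intensity to [0, 1]
    ny, nx = img.shape
    counts = []
    for r in sizes:
        occupied = set()
        for y in range(ny):
            for x in range(nx):
                # box index along x, y and the intensity axis (scaled to the image size)
                iz = int(img[y, x] * max(ny, nx)) // r
                occupied.add((y // r, x // r, iz))
        counts.append(len(occupied))
    # D is the slope of ln N against ln(1/r)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope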

Monday, October 01, 2007

How to distinguish between landscape and portrait pictures?

  1. Perhaps we can search for a large number of pixels with the same natural colors: green, blue and black (shadows). Look at whether a large fraction of pixels share the same 'Hue' and 'Saturation'.
  2. Another try: look at the Fourier spectra of images, and mark the radii enclosing 60%, 90%, 99% and 99.9% of the power (see the sketch after this list). These should be distinct for landscape images versus facial portraits or nearby objects.
  3. Human subjects have a lot more symmetry than natural objects. In fact, there could be a fractal pattern across the different length scales of an image of natural scenery. Try to capture the 'fractal' properties of the pixels.
Speaking of the last one: one could look at the fractal dimension of a picture's pixel values. How? Perhaps in the next blog post...
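An exploratory Python sketch of the second idea above; the power fractions follow the post, while the grayscale input and everything else are assumptions:

import numpy as np

def power_radii(gray, fractions=(0.60, 0.90, 0.99, 0.999)):
    """Radii (in pixels) of circles in the Fourier plane that enclose
    the given fractions of the total spectral power of a grayscale image."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    ny, nx = spec.shape
    yy, xx = np.indices((ny, nx))
    rad = np.hypot(yy - ny // 2, xx - nx // 2)
    # sort pixels by radius and accumulate the enclosed power
    order = np.argsort(rad.ravel())
    cum = np.cumsum(spec.ravel()[order])
    cum /= cum[-1]
    return [rad.ravel()[order][np.searchsorted(cum, f)] for f in fractions]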

Friday, June 29, 2007

AIPS : initial calib idea

Initial Calibration steps in AIPS

  1. INDXR
  2. First clip the source data for arbitrarily high points (100 kJy) due to correlator errors (CLIPM)

  3. SETJY to set the fluxes of the primary calibrators in the SU table
  4. CALIB on the primary calibrator to find antenna solutions: SN table 1
  5. CALIB on the secondary calibrator: SN table 2
  6. GETJY to calculate and assign the secondary calibrator fluxes (using the SN and SU tables)
  7. CLCAL to apply the secondary calibrator's calibration to the target sources: CL table 2


Now one is free to excise interference. Once one has cleaned all the data, return to step 4 above and redo the calibration (after deleting all the SN and CL tables generated above).
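If these steps are scripted from Python via ParselTongue, the first few look roughly as follows. This is only a sketch: the user number, data name and calibrator name are placeholders, and the adverb settings shown are assumptions to be checked against the AIPS help files.

from AIPS import AIPS
from AIPSTask import AIPSTask
from AIPSData import AIPSUVData

AIPS.userno = 100                              # placeholder AIPS user number
uvdata = AIPSUVData('MYDATA', 'UVDATA', 1, 1)  # placeholder (name, class, disk, seq)

indxr = AIPSTask('indxr')                      # step 1: index the data
indxr.indata = uvdata
indxr.go()

setjy = AIPSTask('setjy')                      # step 3: primary calibrator flux in SU table
setjy.indata = uvdata
setjy.sources[1:] = ['3C286']                  # placeholder primary calibrator
setjy.optype = 'CALC'                          # assumed adverb setting
setjy.go()

calib = AIPSTask('calib')                      # step 4: antenna solutions -> SN table 1
calib.indata = uvdata
calib.calsour[1:] = ['3C286']
calib.snver = 1                                # assumed adverb setting
calib.go()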

Friday, October 06, 2006

Images and Fourier Analysis

We want to teach students the concept of the Fourier spectrum and its use in image processing, using Octave. Though students know the formulae, they have no physical understanding of the Fourier domain, or of how to use the coefficients to extract something useful from an image or signal. We have simple programs to clarify some fundamental ideas about the Fourier domain.

I created programs for their use:
  1. Simple routines to create images with sinusoidal waves, so that students understand the concepts of 'spatial waves' and 'spatial frequencies' (a sketch of items 1 and 2 follows after this list).

  2. Then students are asked to look at a simple circular step function in an image and its Fourier transform (a sinc-like pattern, plotted with a square-root stretch). They are expected to vary the amplitude and size of the step function and understand the correspondence with the sinc-like pattern in the Fourier spectrum.

  3. Students are then given two Gaussians of different widths, with some separation. One can then use Fourier filters to smooth the image. As one throws away the 'higher spatial frequencies', some details are lost. How much smoothing leads to which details being lost?
  4. Finally, we use a textbook image to convey the idea that different 'structures' in an image are nothing but grey-level variations of certain spatial sizes; these sizes correspond to 'spatial wavelengths'.
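The exercises themselves were written in Octave; here is an equivalent hedged sketch of items 1 and 2 in Python/NumPy (the image size, wave frequency and step radius are arbitrary choices):

import numpy as np
import matplotlib.pyplot as plt

N = 256
y, x = np.indices((N, N))

# Item 1: an image made of a single spatial wave (k cycles across the image)
k = 8
wave = np.sin(2 * np.pi * k * x / N)

# Item 2: a circular step function and its Fourier spectrum (square-root stretch)
r = np.hypot(x - N / 2, y - N / 2)
step = (r < 20).astype(float)
spec = np.sqrt(np.abs(np.fft.fftshift(np.fft.fft2(step))))

for img, title in [(wave, "spatial wave"), (step, "circular step"),
                   (spec, "sqrt of its Fourier amplitude")]:
    plt.figure()
    plt.imshow(img, origin="lower", cmap="gray")
    plt.title(title)
plt.show()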