Laboratory Facilities

Developed and Developing Programs

Major Developed Simulation Programs:

  • (1D-DDCC) One-Dimensional Poisson, Drift-Diffusion, and Schrodinger Solver: This solver can solve the Poisson-Schrodinger and drift-diffusion equations self-consistently. It can also calculate the tunneling current. Radiative and Shockley-Read-Hall recombination are included in the program. The program can also handle carrier generation, which makes it capable of solving solar cell and PL problems. The AM1.5 spectrum is built into the solver. The function is similar to PC1D.

  • (2D-DDCC) Two-Dimensional Poisson and Drift-Diffusion Solver: This solver can solve the two-dimensional Poisson-Schrodinger and drift-diffusion equations self-consistently. This program can be used to solve HFET and LED problems. The function is similar to APSYS or ATLAS.

  • (3D-DDCC) Three-Dimensional FEM Poisson and Drift-Diffusion Solver: Based on the gmsh program and the finite element method developed in our laboratory. This solver can solve the three-dimensional Poisson-Schrodinger and drift-diffusion equations self-consistently. This program can be used to solve HFET, LED, and nanowire transistor problems. The function is similar to APSYS.

  • Monte Carlo program: Studies carrier transport mechanisms in semiconductor material systems. The following mechanisms are included in the model:

  • Ionized impurity scattering
  • Acoustic phonon scattering
  • Polar optical phonon scattering
  • Equivalent and non-equivalent intervalley and intravalley scattering
  • Alloy scattering
  • Dislocation scattering
  • Grain boundary scattering
  • Defect trapping scattering
  • Radiative Recombination
  • k.p program for dealing with quantum well and quantum dot systems.

  • 3D thermal modeling program: Based on the gmsh program and the finite element method developed in our laboratory; it can handle heat dissipation problems, including packaging issues.

  • Valence force field model: Used to calculate the strain distribution of semiconductor devices.

  • Q & A: Please go to our discussion board.

  • Facilities
  • 4 x Asus 1642 Xeon 5345 with total 8 cores + 24 GB memory (61 GFlops Linpack)
  • 2 x IBM X3350 Xeon 5355 with total 8 cores + 16 GB memory (69 GFlops Linpack)
  • 2 x Supermicro Xeon 5430 with total 8 cores + 64 GB memory (74 GFlops Linpack)
  • 4 x Supermicro Xeon 5410 with total 8 cores + 16 GB memory (66 GFlops Linpack)
  • 1 x IBM X3650 Xeon 5530 with total 8 cores / 16 threads + 48 GB memory (150 GFlops Linpack)
  • 1 x Supermicro AMD 2431 with total 12 cores + 48 GB memory (110 GFlops Linpack)
  • 1 x Supermicro AMD 6174 with total 48 cores + 192 GB memory (460 GFlops Linpack)
  • 4 x Supermicro Xeon 5630 with total 8 cores + 48 GB memory (120 GFlops Linpack)
  • Total: 1810 GFlops
  • Tutorials for using the lab facilities

    Linux official website:  http://www.linux.org

    Latex official website:  http://www.latex-project.org/

    OpenOffice official website:  http://zh.openoffice.org/

    Subversion official website: http://subversion.tigris.org/

     

    Date        Author          Content                                  Supplement
    2007/08/07  Tian-Li Yu      Linux short tutorial (zip)
    2007/08/14  Yuh-Renn Wu     Latex manuscripts and examples (zip)
    2007/08/21  Chen-Mou Cheng  Subversion tutorial (ppt, pdf)           Online Free O'Reilly Book
    2007/08/28  Tian-Li Yu      C/C++ programming & make (zip)
    2007/09/04  Yuh-Renn Wu     Matlab tutorial (zip)
    2007/09/11  Jason Chang     Parallel & Distributed Computing (ppt)
     

    PBS tutorial
    Example of a PBS script for submitting jobs (job.sh)
    #!/bin/bash
    #PBS -S /bin/bash
    #PBS -N grid-pi
    #PBS -l nodes=1:ppn=1,mem=1gb,walltime=100:00:00,nice=15
    #PBS -M xxx@xxx.xxx.xx
    #PBS -q long
    #PBS -m abe
    #PBS -o /home/username/pbs_out
    #PBS -e /home/username/pbs_out
    cd /home/username/workingdirectory
    ./gridpi.exe > gridpi.info &
    ./a.out > a.out.txt &
    wait


    Note:
    /home/username/pbs_out is a directory, not a file; you need to create this directory first.

    The "wait" command is very important: the script will wait until the previous commands have finished.

    xxx@xxx.xxx.xx is your email address.

    ppn=1 : How many CPUs you need per node. If you are running a serial job, use ppn=1. If you are running a parallel job, 1-8 are suggested values. You may not be able to find a workstation with ppn larger than 8; only a few nodes have more than 8 CPUs.

    mem=1gb : How much memory you estimate your program will use. If you know your program needs more than 1 GB, you had better request the amount you need, otherwise your program might crash due to insufficient memory.

    -l nodes=1 : Node number. The default is 1 unless you are running an MPI program.
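
    For a parallel job, the same script layout applies; only the resource request and the run line change. A minimal sketch, assuming an MPI executable named my_mpi_prog and that mpirun is available on the compute nodes (both are placeholders, not part of the notes above):
    #!/bin/bash
    #PBS -S /bin/bash
    #PBS -N mpi-test
    #PBS -l nodes=1:ppn=4,mem=4gb,walltime=100:00:00,nice=15
    #PBS -q long
    #PBS -o /home/username/pbs_out
    #PBS -e /home/username/pbs_out
    cd /home/username/workingdirectory
    # run on the 4 CPUs requested by ppn=4 (the exact mpirun options may differ on our cluster)
    mpirun -np 4 ./my_mpi_prog > mpi.out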

    Submitting jobs
    qsub job.sh
    Query job status
    qstat
    qstat -n
    qstat -f
    qstat -q
    Show all running jobs
    showq
    Delete a job
    qdel (jobid)
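
    A typical job-management sequence looks like the following (the job ID 1234 is only an illustrative value; qsub prints the real ID when the job is accepted):
    qsub job.sh        # submit; prints something like 1234.servername
    qstat -n           # check the job status and the node it is running on
    showq              # see the job in the overall queue
    qdel 1234          # delete the job if it was submitted by mistake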

    BASH Script Example
    Example of a BASH script (submit.sh)
    #!/bin/bash
    # submit.sh: create one directory per case, copy the executables in,
    # generate a matching excu.sh from the template, and submit it with qsub.
    file[1]="t1.sh"
    file[2]="t2.sh"
    file[3]="t3.sh"
    file[4]="t4.sh"
    file[5]="t5.sh"
    file[6]="t6.sh"

    for (( i=1; i<=6; i++ )); do
        mkdir -p "${file[$i]}"
        cd "${file[$i]}"
        for (( j=1; j<=6; j++ )); do
            mkdir -p "$j"
            cp ../a.out "$j"
            cp ../gridpi.exe "$j"
            # rewrite the working directory and job name in the template script;
            # using | as the sed delimiter avoids clashing with the / in the paths
            sed "s|/home/yrwu/2|/home/yrwu/2/${file[$i]}/$j|g" ../excu.sh \
                | sed "s|grid-pi|grid-pi${file[$i]}$j|g" > "$j/excu.sh"
            qsub "$j/excu.sh"
        done
        cd ..
    done
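
    The excu.sh template itself is not listed here. A minimal sketch of what it might look like, assuming it follows the same pattern as job.sh above (only the /home/yrwu/2 working directory and the grid-pi job name are taken from submit.sh; everything else is a placeholder):
    #!/bin/bash
    #PBS -S /bin/bash
    #PBS -N grid-pi
    #PBS -l nodes=1:ppn=1,mem=1gb,walltime=100:00:00,nice=15
    #PBS -q long
    # submit.sh rewrites the job name above and the directory below for each case
    cd /home/yrwu/2
    ./gridpi.exe > gridpi.info &
    ./a.out > a.out.txt &
    wait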