Something More for Research

Explorer of Research #HEMBAD

Archive for the ‘Cryptography’ Category

Building Static zlib v1.2.7 with MSVC 2012

Posted by Hemprasad Y. Badgujar on January 22, 2015


This post will explain how to obtain and build the zlib C programming library from source, using MS Visual Studio 2012 on Windows 7. The result will be a static release version that you can use in your C/C++ projects for data compression, or as a dependency for other libraries.

Downloads

The Environment

  1. Decompress and untar the library with 7-Zip and you’ll end up with a directory path similar to this:
    C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\

Building

  1. Modify “libs\zlib-1.2.7\contrib\masmx86\bld_ml32.bat,” adding “/safeseh” to the following two lines.
    Before:

    ml /coff /Zi /c /Flmatch686.lst match686.asm
    ml /coff /Zi /c /Flinffas32.lst inffas32.asm

    After:

    ml /safeseh /coff /Zi /c /Flmatch686.lst match686.asm
    ml /safeseh /coff /Zi /c /Flinffas32.lst inffas32.asm
  2. Open the solution file that came with the package, “libs\zlib-1.2.7\contrib\vstudio\vc10\zlibvc.sln,” and upgrade the solution file if necessary to MSVC 2012.
  3. Change to “Release” configuration.
  4. Remove “ZLIB_WINAPI;” from the “zlibstat” project’s property page: “Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions”.
  5. Build the solution.
  6. The new static library file is created in a new subfolder:
    C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\contrib\vstudio\vc10\x86\ZlibStatRelease\zlibstat.lib

Installing

  1. Create a place for the zlib library with “zlib” and “lib” subfolders.
    mkdir "C:\workspace\lib\zlib\zlib-1.2.7\zlib"
    mkdir "C:\workspace\lib\zlib\zlib-1.2.7\lib"
  2. Copy the header files.
    xcopy "C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\*.h" "C:\workspace\lib\zlib\zlib-1.2.7\zlib"
  3. Copy the library file.
    xcopy "C:\Users\%USERNAME%\Downloads\lib\zlib-1.2.7\contrib\vstudio\vc10\x86\ZlibStatRelease\zlibstat.lib" "C:\workspace\lib\zlib\zlib-1.2.7\lib\zlibstat.lib"
  4. Add the include and lib paths to the default project property page in MSVC 2012:
    View → Other Windows → Property Manager → Release/Debug → Microsoft.Cpp.Win32.user.
    Be sure to save the property sheet so that the changes take effect.
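    For example, with the install locations created above, add C:\workspace\lib\zlib\zlib-1.2.7\zlib to the include path (“VC++ Directories → Include Directories”) and C:\workspace\lib\zlib\zlib-1.2.7\lib to the library path (“VC++ Directories → Library Directories”); adjust the paths if you installed elsewhere.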

Testing

  1. Create a new project, “LibTest” in MSVC 2012.
  2. Explicitly add the zlib library to the project: Project → Properties → Linker → Input → Additional Dependencies = “zlibstat.lib;”
  3. Create a source file in the project and copy the “zpipe.c” example code.

Build the project. It should compile and link successfully.
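
If you would rather not paste the whole zpipe.c example, the small round-trip program below is enough to prove that the header and zlibstat.lib are being picked up. This is only a sketch; it assumes the include and library paths configured above.

#include <cstring>
#include <iostream>
#include <vector>
#include "zlib.h"

int main()
{
    const char msg[] = "Hello, zlib 1.2.7 static build!";
    uLong srcLen = static_cast<uLong>(sizeof(msg));

    // compressBound() returns the worst-case size of the compressed output.
    std::vector<Bytef> packed(compressBound(srcLen));
    uLongf packedLen = static_cast<uLongf>(packed.size());
    if (compress(&packed[0], &packedLen, reinterpret_cast<const Bytef*>(msg), srcLen) != Z_OK) {
        std::cerr << "compress failed" << std::endl;
        return 1;
    }

    std::vector<Bytef> restored(srcLen);
    uLongf restoredLen = static_cast<uLongf>(restored.size());
    if (uncompress(&restored[0], &restoredLen, &packed[0], packedLen) != Z_OK) {
        std::cerr << "uncompress failed" << std::endl;
        return 1;
    }

    std::cout << "zlib " << zlibVersion() << ": " << srcLen << " -> " << packedLen
              << " -> " << restoredLen << " bytes, "
              << (std::memcmp(msg, &restored[0], srcLen) == 0 ? "round trip OK" : "MISMATCH")
              << std::endl;
    return 0;
}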

Potential Issues

These are some of the problems that you might run into while trying to build zlib.

LNK2026: module unsafe for SAFESEH image

You need to include support for safe structured exception handling. Modify “libs\zlib-1.2.7\contrib\masmx86\bld_ml32.bat,” adding “/safeseh” to the following two lines.
Before:

ml /coff /Zi /c /Flmatch686.lst match686.asm
ml /coff /Zi /c /Flinffas32.lst inffas32.asm

After:

ml /safeseh /coff /Zi /c /Flmatch686.lst match686.asm
ml /safeseh /coff /Zi /c /Flinffas32.lst inffas32.asm

LNK2001: unresolved external symbol _inflateInit_

The code is trying to link with the DLL version of the library instead of the static version. Remove “ZLIB_WINAPI;” from the “zlibstat” project’s property page: “Configuration Properties → C/C++ → Preprocessor → Preprocessor Definitions”.


GPU Parallel Programming in VS2012 with NVIDIA CUDA

Posted by Hemprasad Y. Badgujar on March 4, 2013


1. Introduction

Here I will share my first experience creating a CUDA-based C++ program on Windows using Visual Studio 2012. CUDA (Compute Unified Device Architecture) is NVIDIA’s general-purpose computing API for their graphics card hardware. The simple program below is adapted from NVIDIA’s sample code and performs a SAXPY (y = a·x + y) operation on a large vector. Before continuing, you should have installed the required CUDA drivers, toolkit and SDK from here:
http://developer.nvidia.com/cuda/cuda-downloads

Or, if you’d rather choose to install the latest CUDA toolkit, head over here:
http://developer.nvidia.com/cuda/cuda-pre-production

You should also have a working C++ compiler. I am using Visual Studio 2012 on Windows 8 64-bit. Please be advised that CUDA-based applications won’t run unless the appropriate NVIDIA GPU hardware supporting CUDA is present in your system.

2. Setting up Visual Studio 2012

Basically everything should be set up automatically by the installer. However, with the current release of CUDA version 5.0, you might not be able to compile/build your project successfully. This is because nvcc.exe does not yet support the new cl.exe compiler version. If you try to compile any samples from the SDK, there will be errors about the CUDA .targets and .props files not being found. To fix this, manually deploy those files according to the instructions in “C:\Program Files (x86)\NVIDIA GPU Computing Toolkit\CUDA\v5.0\extras\visual_studio_integration”.

Those files still need some modifications for a successful compilation. You can download the modified files here: BuildCustomizations.rar. Extract the contents to the folder “C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\BuildCustomizations\”.
If you prefer to modify the files manually, follow these instructions carefully:

  1. Copy all the build customization files somewhere
  2. Open “CUDA 5.0.props”. Search for the following lines:
    Code:
    <CudaClVersion Condition="'$(PlatformToolset)' == 'v90'">2008</CudaClVersion>
    <CudaClVersion Condition="'$(PlatformToolset)' == 'v100'">2010</CudaClVersion>

    and add this new line:

    Code:
    <CudaClVersion Condition="'$(PlatformToolset)' == 'v110'">2010</CudaClVersion>
  3. Open “CUDA 5.0.targets”. Search for the text “CudaCleanDependsOn” and replace the tag content with these lines:
    Code:
    <CudaCleanDependsOn>
      $(CudaCompileDependsOn);
      _SelectedFiles;
      CudaFilterSelectedFiles;
      AddCudaCompileMetadata;
      AddCudaLinkMetadata;
      AddCudaCompileDeps;
      AddCudaCompilePropsDeps;
      ValidateCudaBuild;
      ValidateCudaCodeGeneration;
      ComputeCudaCompileOutput;
      PrepareForCudaBuild
    </CudaCleanDependsOn>
  4. In the same file, search for “GenerateRelocatableDeviceCode”. Replace the line with the following:
    Code:
    GenerateRelocatableDeviceCode="%(CudaCompile.GenerateRelocatableDeviceCode)"
  5. Go down a bit and look for “CodeGeneration”. Replace the line with this:
    Code:
    CodeGeneration="%(CudaCompile.CodeGenerationValues)"
  6. Again search for “CommandLineTemplate”. It should be somewhere near the end of the file. Replace the line with this:
    Code:
    CommandLineTemplate=""$(CudaToolkitNvccPath)" %(CudaCompile.BuildCommandLineTemplate) %(CudaCompile.ApiCommandLineTemplate) %(CudaCompile.CleanCommandLineTemplate)" />
  7. Copy all modified files here: “C:\Program Files (x86)\MSBuild\Microsoft.Cpp\v4.0\V110\BuildCustomizations\”

Also, modify line 90 of the file “host_config.h” located in the folder:
“C:\Program Files (x86)\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include\”
by changing the value ‘1600’ to ‘1700’.
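
For reference, the version guard around line 90 of host_config.h looks roughly like the following (this is an assumption; the exact error message differs between CUDA releases). Raising the upper bound from 1600 (VS2010) to 1700 (VS2012) is what lets nvcc accept the newer cl.exe.
Before (assumed wording — verify against your copy of the file):

#if _MSC_VER < 1400 || _MSC_VER > 1600
#error -- unsupported Microsoft Visual Studio version!

After:

#if _MSC_VER < 1400 || _MSC_VER > 1700
#error -- unsupported Microsoft Visual Studio version!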

Note: Remove the ‘x86’ inside the paths if you use the 64-bit CUDA toolkit.

Syntax Highlighting

To have a fancy C++ syntax highlighting feature enabled, follow these steps:

  1. Select the menu “Tools->Options…”. Open “Text Editor” in the tree view on the left, and click on “File Extension”.
  2. Type “cu” in the “Extension” box, set the editor to “Microsoft Visual C++” and click “Add”. Click “OK” on the dialog box.
  3. Restart Visual Studio and your CUDA code should now have syntax highlighting.

3. Creating the App

Make sure you have installed all required SDKs. If everything is ok, then start by creating a simple console project and type this code:

Code:
#include <algorithm>        // std::max
#include <cmath>            // std::abs
#include <cstdlib>          // malloc, free
#include <iostream>
#include <cuda_runtime.h>   // not strictly needed when compiling with nvcc
using namespace std;

// Kernel: single-precision a*x + y (SAXPY), one thread per element
__global__ void saxpy(int n, float a, float *x, float *y)
{
  int i = blockIdx.x*blockDim.x + threadIdx.x;
  if (i < n) y[i] = a*x[i] + y[i];
}

int main(void)
{
  int N = 1<<20;                        // 1M elements
  float *x, *y, *d_x, *d_y;
  x = (float*)malloc(N*sizeof(float));  // host buffers
  y = (float*)malloc(N*sizeof(float));
  cudaMalloc(&d_x, N*sizeof(float));    // device buffers
  cudaMalloc(&d_y, N*sizeof(float));
  for (int i = 0; i < N; i++) {
    x[i] = 1.0f;
    y[i] = 2.0f;
  }
  cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
  cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
  // Perform SAXPY on 1M elements, 256 threads per block
  saxpy<<<(N+255)/256, 256>>>(N, 2.0f, d_x, d_y);
  cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
  float maxError = 0.0f;
  for (int i = 0; i < N; i++)
    maxError = max(maxError, abs(y[i]-4.0f));   // expected result: 2*1 + 2 = 4
  cout << "Max error: " << maxError << endl;
  cudaFree(d_x);                        // release device and host memory
  cudaFree(d_y);
  free(x);
  free(y);
  return 0;
}

Before compiling, make a reference to the CUDA library by specifying its location and name in the project’s properties page:

  1. Navigate to the “Configuration Properties\Linker\General” option
  2. In the “Additional Library Directories” field, add “$(CUDA_PATH)\lib\$(PlatformName)”
  3. Go to the “Configuration Properties\Linker\Input” option
  4. Lastly in the “Additional Dependencies” field, add “cudart.lib”

The code should compile successfully.
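
The sample above never checks CUDA return codes, so a missing driver or an unsupported GPU fails silently. As a sketch (the helper name below is chosen only for illustration), something like this can be added to the file and called after the kernel launch and after the final cudaMemcpy:

#include <cuda_runtime.h>
#include <iostream>

// Minimal error check, not part of the original sample.
static void checkCuda(const char* where)
{
    cudaError_t err = cudaGetLastError();   // launch/configuration errors
    if (err == cudaSuccess)
        err = cudaDeviceSynchronize();      // errors raised while the kernel ran
    if (err != cudaSuccess)
        std::cerr << where << ": " << cudaGetErrorString(err) << std::endl;
}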

Read more: http://blog.norture.com/2012/10/gpu-parallel-programming-in-vs2012-with-nvidia-cuda/#ixzz2MVFioDQt


GPU Parallel Statistical and Cube Test Analysis of the SHA-3 Finalist Candidate Hash Functions

Posted by Hemprasad Y. Badgujar on August 27, 2012


GPU Parallel Statistical and Cube Test Analysis
of the SHA-3 Finalist Candidate Hash Functions

This page is under construction.

What’s New
Paper
Source Code
Analysis Results


What’s New

16-Feb-2012
Gave a presentation on this work at the 15th SIAM Conference on Parallel Processing for Scientific Computing (PP12).

28-Sep-2011
The Journal of Cryptographic Engineering declined to publish the paper.

14-Jul-2011
Posted the source code and analysis results.

13-Jul-2011
Posted the paper as submitted to the Journal of Cryptographic Engineering.


Paper

Alan Kaminsky. GPU parallel statistical and cube test analysis of the SHA-3 finalist candidate hash functions. July 13, 2011.

Abstract. The 256-bit versions of the SHA-3 finalist candidate hash functions — BLAKE, Grøstl, JH, Keccak, and Skein — were subjected to statistical tests to attempt to disprove the hypothesis that the output bits are uniformly distributed, independent, binary random variables. The hash functions were also subjected to cube tests to attempt to disprove the hypothesis that the superpoly bits are uniformly distributed, independent, binary random variables. The hash functions and test programs were implemented to run in parallel on a 448-core GPU supercomputer; the cube tests in particular require massive amounts of computation and are ideally suited for parallel implementation. Nonrandom behavior was observed at the 0.01 significance level in the BLAKE, JH, Keccak, and Skein hash functions. Nonrandom behavior was not observed at the 0.01 significance level in the Grøstl hash function.

Paper: jce2011.pdf (1,717,919 bytes)

SIAM PP12 Conference presentation: sha3pp12.pdf (1,288,487 bytes)


Source Code

Program source code archive: sha3test01.tgz (54,510 bytes)

Compiling the programs:

  1. Install the NVIDIA CUDA tools.
  2. Install the NVIDIA CUDA software development kit (optional).
  3. Download and unpack the program source code archive.
  4. make

Running the programs:

 

  1. A CUDA device of compute capability 2.0 or higher is required. 
  2. Each program’s command line arguments are documented in the comments at the top of the source file. 
  3. CUDA device 0 is used by default. To use another CUDA device, set the CUDA_DEVICE environment variable to the desired CUDA device number (see the sketch after this list); for example:
    Bash: export CUDA_DEVICE=1
    Csh: setenv CUDA_DEVICE 1
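
The test programs read CUDA_DEVICE themselves; the CUDA runtime does not honor this variable on its own. The source archive shows exactly how, but the idea is presumably along these lines (a sketch only, not the authors’ code):

#include <cstdlib>
#include <cuda_runtime.h>

// Select the CUDA device named by the CUDA_DEVICE environment variable,
// falling back to device 0 when the variable is not set.
static int chooseDeviceFromEnv()
{
    const char* env = std::getenv("CUDA_DEVICE");
    int dev = (env != NULL) ? std::atoi(env) : 0;
    cudaSetDevice(dev);
    return dev;
}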


Analysis Results

The following commands were run on a system with an NVIDIA Tesla C2050 card to produce the results in the journal paper.

BLAKE

$ ./BlakeEvalFunction func_1.dat 142857 2000 10240 > func_1.nohup.out
$ ./StatTest func_1.dat -report bit,pair > func_1.txt

Function sample inputs: func_1.nohup.out (199,291 bytes)
Function samples: func_1.dat (655,360,090 bytes)
Function sample analysis results: func_1.txt (1,569,213 bytes)

$ ./BlakeEvalSuperpoly superp_1.dat 999999 1 20 15 34 5 10240 > superp_1.nohup.out
$ ./StatTest superp_1.dat -report bit,pair > superp_1.txt

Superpoly sample inputs: superp_1.nohup.out (431,980 bytes)
Superpoly samples: superp_1.dat (655,360,105 bytes)
Superpoly sample analysis results: superp_1.txt (1,569,215 bytes)

Grøstl

$ ./GroestlEvalFunction func_1.dat 142857 2000 10240 > func_1.nohup.out
$ ./StatTest func_1.dat -report bit,pair > func_1.txt

Function sample inputs: func_1.nohup.out (199,294 bytes)
Function samples: func_1.dat (655,360,092 bytes)
Function sample analysis results: func_1.txt (1,569,213 bytes)

$ ./GroestlEvalSuperpoly superp_1.dat 999999 1 20 15 34 5 10240 > superp_1.nohup.out
$ ./StatTest superp_1.dat -report bit,pair > superp_1.txt

Superpoly sample inputs: superp_1.nohup.out (435,752 bytes)
Superpoly samples: superp_1.dat (655,360,107 bytes)
Superpoly sample analysis results: superp_1.txt (1,569,215 bytes)

JH

$ ./JHEvalFunction func_1.dat 142857 2000 10240 > func_1.nohup.out
$ ./StatTest func_1.dat -report bit,pair > func_1.txt

Function sample inputs: func_1.nohup.out (199,372 bytes)
Function samples: func_1.dat (655,360,087 bytes)
Function sample analysis results: func_1.txt (1,569,213 bytes)

$ ./JHEvalSuperpoly superp_1.dat 999999 1 20 15 34 5 10240 > superp_1.nohup.out
$ ./StatTest superp_1.dat -report bit,pair > superp_1.txt

Superpoly sample inputs: superp_1.nohup.out (432,372 bytes)
Superpoly samples: superp_1.dat (655,360,102 bytes)
Superpoly sample analysis results: superp_1.txt (1,569,215 bytes)

Keccak

$ ./KeccakEvalFunction func_1.dat 142857 2000 10240 > func_1.nohup.out
$ ./StatTest func_1.dat -report bit,pair > func_1.txt

Function sample inputs: func_1.nohup.out (199,304 bytes)
Function samples: func_1.dat (655,360,091 bytes)
Function sample analysis results: func_1.txt (1,569,221 bytes)

$ ./KeccakEvalSuperpoly superp_1.dat 999999 1 20 15 34 5 10240 > superp_1.nohup.out
$ ./StatTest superp_1.dat -report bit,pair > superp_1.txt

Superpoly sample inputs: superp_1.nohup.out (432,836 bytes)
Superpoly samples: superp_1.dat (655,360,106 bytes)
Superpoly sample analysis results: superp_1.txt (1,569,223 bytes)

Skein

$ ./SkeinEvalFunction func_1.dat 142857 2000 10240 > func_1.nohup.out
$ ./StatTest func_1.dat -report bit,pair > func_1.txt

Function sample inputs: func_1.nohup.out (199,272 bytes)
Function samples: func_1.dat (655,360,090 bytes)
Function sample analysis results: func_1.txt (1,569,213 bytes)

$ ./SkeinEvalSuperpoly superp_1.dat 999999 1 20 15 34 5 10240 > superp_1.nohup.out
$ ./StatTest superp_1.dat -report bit,pair > superp_1.txt

Superpoly sample inputs: superp_1.nohup.out (432,579 bytes)
Superpoly samples: superp_1.dat (655,360,105 bytes)
Superpoly sample analysis results: superp_1.txt (1,569,215 bytes)


Types Of Cryptographic Algorithms

Posted by Hemprasad Y. Badgujar on August 27, 2012


Types Of Cryptographic Algorithms – little drops @ thiyagaraaj.com.


Algorithms

Posted by Hemprasad Y. Badgujar on August 26, 2012


List of Algorithms

A list of about 300 major algorithms across many domains. The goal is to provide a ready-to-run program for each one, or a description of the algorithm. Programming languages include Java, JavaScript, PHP, C, and C++, either written directly or generated from a Scriptol source.

Automata

  • Powerset construction. Algorithm to convert nondeterministic automaton to deterministic automaton.
  • Todd-Coxeter algorithm. Procedure for generating cosets.

Artificial intelligence

  • Alpha-beta pruning. Prunes branches of the minimax game-tree search that cannot affect the result. Widely used in board games.
  • Ant-algorithms. The ant colony optimisation is a set of algorithms inspired by ant behavior to solve a problem, find the best path between two locations.
  • DE (Differential evolution). Solves the Chebyshev polynomial fitting problem.
  • Semi-Supervised Recognition of Sarcastic Sentences in Online Product Reviews.
    Algorithm that recognizes sarcasm or irony in a tweet or an online document. Such an algorithm will also be essential for programming humanoid robots.
  • Sentiment analysis. Actually a combination of algorithms: naive Bayes, maximum entropy, and SVM (Support Vector Machine classifier).

Computer vision

  • Epitome. Represent an image or video by a smaller one.
  • Counting objects in an image. Uses the connected-component labeling algorithm to first label each object, then counts the objects.
  • O’Carroll algorithm. Derived from a mathematical model of insect vision, this algorithm evaluates how to get around while avoiding objects.

Genetic algorithms

They use three operators: selection (choose solutions), reproduction (use the chosen solutions to construct new ones), and replacement (replace a solution when a better one is found). A minimal roulette-wheel selection sketch follows the list below.

  • Fitness proportionate selection. Also known as roulette-wheel selection, is a function used for selecting solutions.
  • Truncation selection. Another method for selecting solutions, ordered by fitness.
  • Tournament selection. Select the best solution by a kind of tournament.
  • Stochastic universal sampling. The individuals are mapped to contiguous segments of a line, such that each individual’s segment is equal in size to its fitness exactly as in roulette-wheel selection.
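
To make the selection operator above concrete, here is a minimal roulette-wheel (fitness proportionate) selection sketch in C++; the fitness values are invented for the example.

#include <cstdlib>
#include <iostream>
#include <numeric>
#include <vector>

// Fitness proportionate (roulette-wheel) selection: an individual is picked
// with probability fitness[i] / sum(fitness).
std::size_t rouletteSelect(const std::vector<double>& fitness)
{
    double total = std::accumulate(fitness.begin(), fitness.end(), 0.0);
    double spin = total * (std::rand() / (RAND_MAX + 1.0));
    double cumulative = 0.0;
    for (std::size_t i = 0; i < fitness.size(); ++i) {
        cumulative += fitness[i];
        if (spin < cumulative) return i;
    }
    return fitness.size() - 1;   // numerical safety net
}

int main()
{
    std::vector<double> fitness;
    fitness.push_back(1.0);
    fitness.push_back(3.0);
    fitness.push_back(6.0);      // individual 2 should win roughly 60% of the time
    std::vector<int> wins(fitness.size(), 0);
    std::srand(12345);
    for (int i = 0; i < 10000; ++i) ++wins[rouletteSelect(fitness)];
    for (std::size_t i = 0; i < wins.size(); ++i)
        std::cout << "individual " << i << " selected " << wins[i] << " times" << std::endl;
    return 0;
}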

Neural networks

  • Hopfield net. Recurrent artificial neural network that serves as a content-addressable memory system with binary threshold units. It converges to a stable state.
  • Backpropagation. Supervised learning technique used for training artificial neural networks.
  • Self-organizing map (Kohonen map). Neural networks trained using unsupervised learning to produce low dimensional (2D, 3D) representation of the training samples. Good for visualizing high-dimensional data.

Bioinformatics and Cheminformatics

  • Needleman-Wunsch. Performs a global alignment on two sequences, for protein or nucleotide sequences.
  • Smith-Waterman. Variation of the Needleman-Wunsch.
  • Ullmann’s algorithm for subgraph isomorphism (1976). Determines whether one graph contains a subgraph isomorphic to another.
    The maximum common subgraph isomorphism problem may be computed with a modular product graph.

Compression

Lossless compression algorithms

  • Burrows-Wheeler transform. Preprocessing useful for improving lossless compression.
  • Deflate. Data compression used by ZIP.
  • Delta encoding. Aid to compression of data in which sequential data occurs frequently.
  • Incremental encoding. Delta encoding applied to sequences of strings.
  • LZW (Lempel-Ziv-Welch). Successor of LZ78. Builds a translation table from the data to compress. It is used by the GIF graphics format.
  • LZ77 and 78. The basis of further LZ variations (LZW, LZSS, …). They are both dictionary coders.
  • LZMA. Short for Lempel-Ziv-Markov chain-Algorithm.
  • LZO. Data compression algorithm that is focused on speed.
  • PPM (Prediction by Partial Matching). Adaptive statistical data compression technique based on context modeling and prediction.
  • Shannon-Fano coding. Constructs prefix codes based on a set of symbols and their probabilities.
  • Truncated binary. An entropy encoding typically used for uniform probability distributions with a finite alphabet. Improve binary encoding.
  • Run-length encoding. Basic compression that replaces a run of the same code with the code and its number of occurrences.
  • Sequitur. Incremental grammar inference on a string.
  • EZW (Embedded Zerotree Wavelet). Progressive encoding that compresses an image into a bit stream with increasing accuracy. May also be used for lossy compression, with better results.

Entropy encoding

A coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols. A minimal Huffman coding sketch follows this list.

  • Huffman coding. Simple lossless compression taking advantage of relative character frequencies.
  • Adaptive Huffman coding. Adaptive coding technique based on Huffman coding.
  • Arithmetic coding. Advanced entropy coding.
  • Range encoding. Same as arithmetic coding, but looked at in a slightly different way.
  • Unary coding. Code that represents a number n with n ones followed by a zero.
  • Elias delta, gamma, and omega coding. Universal codes that encode the positive integers.
  • Fibonacci coding. Universal code which encodes positive integers into binary code words.
  • Golomb coding. Form of entropy coding that is optimal for alphabets following geometric distributions.
  • Rice coding. Form of entropy coding that is optimal for alphabets following geometric distributions.
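
To make the Huffman coding entry above concrete, here is a minimal sketch that builds a Huffman tree from made-up symbol frequencies and prints the resulting code words; a real codec would also serialize the tree and the bit stream.

#include <iostream>
#include <map>
#include <queue>
#include <string>
#include <vector>

struct Node {
    long freq;
    char symbol;               // meaningful only for leaves
    Node *left, *right;
};

struct ByFreq {                // min-heap ordering on frequency
    bool operator()(const Node* a, const Node* b) const { return a->freq > b->freq; }
};

// Walk the tree, accumulating 0 for left edges and 1 for right edges.
void collectCodes(const Node* n, const std::string& prefix, std::map<char, std::string>& codes)
{
    if (!n->left && !n->right) { codes[n->symbol] = prefix.empty() ? std::string("0") : prefix; return; }
    collectCodes(n->left,  prefix + "0", codes);
    collectCodes(n->right, prefix + "1", codes);
}

int main()
{
    std::map<char, long> freq = {{'a', 45}, {'b', 13}, {'c', 12}, {'d', 16}, {'e', 9}, {'f', 5}};
    std::priority_queue<Node*, std::vector<Node*>, ByFreq> heap;
    for (auto& kv : freq) heap.push(new Node{kv.second, kv.first, nullptr, nullptr});

    // Repeatedly merge the two least frequent nodes until one tree remains.
    while (heap.size() > 1) {
        Node* a = heap.top(); heap.pop();
        Node* b = heap.top(); heap.pop();
        heap.push(new Node{a->freq + b->freq, 0, a, b});
    }

    std::map<char, std::string> codes;
    collectCodes(heap.top(), "", codes);
    for (auto& kv : codes)
        std::cout << kv.first << " -> " << kv.second << "\n";   // rarer symbols get longer codes
    return 0;   // nodes intentionally leaked; a real implementation would free them
}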

Lossy compression algorithms

  • Linear predictive coding. Lossy compression by representing the spectral envelope of a digital signal of speech in compressed form.
  • A-law algorithm. Standard companding algorithm.
  • Mu-law algorithm. Standard analog signal compression or companding algorithm.
  • Fractal compression. Method used to compress images using fractals.
  • Transform coding. Type of data compression for data like audio signals or photographic images.
  • Vector quantization. Technique often used in lossy data compression.
  • Wavelet compression. Form of data compression well suited for image and audio compression.

Cryptography

Secret key (symmetric encryption)

Use a secret key (or a pair of directly related keys) for both decryption and encryption.

  • Advanced Encryption Standard (AES), also known as Rijndael.
  • Blowfish. Designed by Schneier as a general-purpose algorithm, intended as a replacement for the aging DES.
  • Data Encryption Standard (DES), whose underlying cipher is also known as the Data Encryption Algorithm (DEA).
  • IDEA (International Data Encryption Algorithm). Formerly IPES (Improved PES), another replacement for DES. It is used by PGP (Pretty Good Privacy). Performs transformations on data split into blocks, using a key.
  • RC4 or ARC4. Stream cipher widely-used in protocols such as SSL for Internet traffic and WEP for wireless networks.
  • Tiny Encryption Algorithm (TEA). An easy-to-implement block cipher built from simple formulas.
  • PES (Proposed Encryption Standard). Older name for IDEA.

Public key (asymmetric encryption)

Uses a pair of keys, designated the public key and the private key. The public key encrypts the message; only the private key can decrypt it. A toy worked example of RSA follows this list.

  • DSA (Digital Signature Algorithm). Generate keys with prime and random numbers. Was used by US agencies, and now public domain.
  • ElGamal. Based on Diffie-Hellman, used by GNU Privacy Guard software, PGP, and other cryptographic systems.
  • RSA (Rivest, Shamir, Adleman). Widely used in electronic commerce protocols. Use prime numbers.
  • Diffie-Hellman (Merkle) key exchange (or exponential key exchange). Method and algorithm to share secret over an unprotected communications channel. Used by RSA.
  • NTRUEncrypt. Make use of rings of polynomials with convolution multiplications.
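
To illustrate the RSA entry above, here is a deliberately tiny, insecure toy example: the textbook-sized primes (61 and 53) are trivial to factor and no padding is used, so this sketch shows only the arithmetic.

#include <cstdint>
#include <iostream>

// Modular exponentiation: (base^exp) mod m, the core operation of RSA.
std::uint64_t powmod(std::uint64_t base, std::uint64_t exp, std::uint64_t m)
{
    std::uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1) result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main()
{
    // Toy key: p = 61, q = 53, n = p*q = 3233, phi = 60*52 = 3120,
    // public exponent e = 17, private exponent d = 2753 (since 17*2753 = 1 mod 3120).
    const std::uint64_t n = 3233, e = 17, d = 2753;
    std::uint64_t message = 65;                        // must be smaller than n
    std::uint64_t cipher  = powmod(message, e, n);     // encrypt with the public key (n, e)
    std::uint64_t plain   = powmod(cipher,  d, n);     // decrypt with the private key (n, d)
    std::cout << "message=" << message << " cipher=" << cipher
              << " decrypted=" << plain << std::endl;  // decrypted value equals the message
    return 0;
}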

Message digest functions

A message digest is a fixed-length code produced by applying a hash function to a string or data of any length.

  • MD5. Used for checking ISO images of CDs or DVDs.
  • RIPEMD (RACE Integrity Primitives Evaluation Message Digest). Based upon the principles of MD4 and similar to SHA-1.
  • SHA-1 (Secure Hash Algorithm 1). Most commonly used of the SHA family of related cryptographic hash functions. Designed by the NSA.
  • HMAC. Keyed-hash message authentication code.
  • Tiger (TTH). Usually used in Tiger tree hashes.

Cryptography using pseudo-random numbers

Techniques in cryptography

Secret sharing, Secret Splitting, Key Splitting, M of N algorithms.

  • Shamir’s secret sharing scheme. This is a formula based on polynomial interpolation.
  • Blakley’s secret sharing scheme. Geometric in nature: the secret is a point in an m-dimensional space.

Other techniques and decryption

  • Subset sum. Given a set of integers, does any subset sum to zero? Used in cryptography.
  • Shor’s algorithm. Quantum algorithm able to break codes based on asymmetric functions such as RSA.

Geometry

  • Gift wrapping. Determining the convex hull of a set of points.
  • Gilbert-Johnson-Keerthi distance. Determining the smallest distance between two convex shapes.
  • Graham scan. Determining the convex hull of a set of points in the plane.
  • Line segment intersection. Finding whether lines intersect with a sweep line algorithm.
  • Point in polygon. Tests whether a given point lies within a given polygon.
  • Ray/Plane intersection.
  • Line/Triangle intersection. Particular case of Ray/Plane intersection.
  • Polygonization of implicit surfaces. Approximate an implicit surface with a polygonal representation.
  • Triangulation. Method to evaluate the distance to a point from angles to other points, whose distance is known.

Graphs

  • 3D Surface Tracker Technology. Process to add images on walls in a video while hidden surfaces are taken into account.
  • Bellman-Ford. Computes shortest paths in a weighted graph (where some of the edge weights may be negative).
  • Graph canonization. Find a canonical form of a graph that is isomorphic to another graph. Used in cheminformatics.
  • Dijkstra’s algorithm. Computes shortest paths in a graph with non-negative edge weights (a minimal sketch follows this list).
  • Perturbation methods. An algorithm that computes locally shortest paths in a graph.
  • Floyd-Warshall. Solves the all pairs shortest path problem in a weighted, directed graph.
  • Floyd’s cycle-finding. Finds cycles in iterations.
  • Johnson. All pairs shortest path algorithm in sparse weighted directed graph.
  • Hopcroft–Karp algorithm. From a bipartite graph, returns a maximum matching (the largest set of edges with no common endpoints). Alternatives are breadth-first and depth-first algorithms.
  • Kruskal. Finds a minimum spanning tree for a graph.
  • Prim’s. Finds a minimum spanning tree for a graph. Also called the DJP, Jarník, or Prim–Jarník algorithm.
  • Boruvka. Finds a minimum spanning tree for a graph.
  • Ford-Fulkerson. Computes the maximum flow in a graph.
  • Edmonds-Karp. Implementation of Ford-Fulkerson.
  • Nonblocking Minimal Spanning Switch. For a telephone exchange.
  • Woodhouse-Sharp. Finds a minimum spanning tree for a graph.
  • Spring based. Algorithm for graph drawing.
  • Hungarian. Algorithm for finding a perfect matching.
  • Coloring algorithm. Graph coloring algorithm.
  • Nearest neighbour. Find nearest neighbour.
  • Topological sort. Sort a directed acyclic graph in such a manner that each node comes before all nodes to which it has edges (according to directions).
  • Tarjan’s off-line least common ancestors algorithm. Compute lowest common ancestors for pairs of nodes in a tree.
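
To make the Dijkstra entry above concrete, here is a minimal sketch using a binary heap; the small directed graph is invented for the example.

#include <functional>
#include <iostream>
#include <limits>
#include <queue>
#include <utility>
#include <vector>

typedef std::pair<long, int> State;                               // (distance so far, vertex)
typedef std::vector<std::vector<std::pair<int, long> > > Graph;   // adjacency: (neighbor, weight)

// Dijkstra's algorithm: shortest distances from a source vertex in a graph
// whose edge weights are non-negative.
std::vector<long> dijkstra(const Graph& adj, int source)
{
    const long INF = std::numeric_limits<long>::max();
    std::vector<long> dist(adj.size(), INF);
    std::priority_queue<State, std::vector<State>, std::greater<State> > pq;
    dist[source] = 0;
    pq.push(State(0, source));
    while (!pq.empty()) {
        State top = pq.top(); pq.pop();
        long d = top.first;
        int  u = top.second;
        if (d > dist[u]) continue;                                // stale queue entry
        for (std::size_t k = 0; k < adj[u].size(); ++k) {
            int  v = adj[u][k].first;
            long w = adj[u][k].second;
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push(State(dist[v], v));
            }
        }
    }
    return dist;
}

int main()
{
    Graph adj(4);                                                 // 4 vertices, directed edges
    adj[0].push_back(std::make_pair(1, 1L));
    adj[0].push_back(std::make_pair(2, 4L));
    adj[1].push_back(std::make_pair(2, 2L));
    adj[1].push_back(std::make_pair(3, 6L));
    adj[2].push_back(std::make_pair(3, 3L));
    std::vector<long> dist = dijkstra(adj, 0);
    for (std::size_t v = 0; v < dist.size(); ++v)
        std::cout << "dist(0 -> " << v << ") = " << dist[v] << std::endl;   // 0, 1, 3, 6
    return 0;
}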

Graphics

  • Bresenham’s line algorithm. Uses decision variables to plot a straight line between 2 specified points.
  • Colorization. A process for coloring a black-and-white picture or video, given a few strokes that mark the colors.
  • Depixelizing Pixel Art. Smoothing algorithm that converts an image of coarse pixels into a realistic picture (Johannes Kopf and Dani Lischinski).
  • Landscape. Draws 3D scenery.
  • DDA line algorithm. Uses floating-point math to plot a straight line between 2 specified points.
  • Flood fill. Fills a connected region with a color.
  • Image Restoring. Restore photo, improve images.
  • Xiaolin Wu’s line algorithm. Line antialiasing.
  • Painter’s algorithm. Detects the visible parts of a 3-dimensional scene.
  • Ray tracing. Realistic image rendering.
  • Phong shading. An illumination model and an interpolation method in 3D computer graphics.
  • Gouraud shading. Simulate the differing effects of light and colour across the surface of a 3D object.
  • Scanline rendering. Constructs an image by moving an imaginary line.
  • Global illumination. Considers direct illumination and reflection from other objects.
  • Interpolation. Constructing new data points such as in digital zoom.
  • Resynthesizer. Removes an object from a photo and rebuilds the background. Used by Photoshop and The Gimp.
  • Slope-intercept algorithm. An implementation of the slope-intercept formula for drawing a line.
  • Spline interpolation. Avoids the large errors of Runge’s phenomenon associated with high-degree polynomial interpolation.
  • 3D Surface Tracker Technology. Adds images or video onto walls in a video, taking hidden surfaces into account.

Lists, arrays and trees

Searching

  • Dictionary search. See predictive search.
  • Selection algorithm. Finds the kth largest item in a list.
  • Binary search algorithm. Locates an item in a sorted list.
  • Breadth-first search. Traverses a graph level by level.
  • Depth-first search. Traverses a graph branch by branch.
  • Disjoint-set data structure and algorithm. One application: building a maze.
  • Best-first search. Traverses a graph in the order of likely importance using a priority queue.
  • A* tree search. Special case of best-first search that uses heuristics to improve speed.
  • Uniform-cost search. A tree search that finds the lowest cost route where costs vary.
  • Predictive search. Binary-search-like algorithm that factors in the magnitude of the search term relative to the high and low values of the current search range.
  • Hash table. Associates keys with items in an unsorted collection, to retrieve them in constant time on average.
  • Interpolated search. See predictive search.
  • Skip list. Structure composed of linked lists for quicker access, with algorithms for search and insertion.

Sorting

  • Binary tree sort. Builds a binary search tree from the items and then traverses it in order; incremental, similar to insertion sort.
  • Bogosort. Inefficient random sort of a deck of cards.
  • Bubble sort. For each pair of indices, swap the items if out of order.
  • Bucket sort. Split a list in buckets and sort them individually. Generalizes pigeonhole sort.
  • Cocktail sort (or bidirectional bubble, shaker, ripple, shuttle, happy hour sort). Variation of bubble sort that sorts in both directions each pass through the list.
  • Comb sort. Efficient variation of bubble sort that eliminates “turtles”, the small values near the end of the list, and makes use of gaps between values.
  • Counting sort. Uses the range of the numbers in list A to create an array B of counters; the counts tell how many elements of A are less than each value, which gives each element its final position (a minimal sketch follows this list).
  • Gnome sort. Similar to insertion sort except that moving an element to its proper place is accomplished by a series of swaps, as in bubble sort.
  • Heapsort. Convert the list into a heap, keep removing the largest element from the heap and adding it to the end of the list.
  • Insertion sort. Determine where the current item belongs in the list of sorted ones, and insert it there.
  • Introsort. Or introspective sort. It begins in quicksort and switches to heapsort at certain recursion level.
  • Merge sort. Sort the first and second half of the list separately, then merge the sorted lists.
  • Pancake sort. Reverse elements of some prefix of a sequence.
  • Pigeonhole sort. Fill an empty array with all elements of an array to be sorted, in order.
  • Postman sort. Hierarchical variant of bucket sort, used by post offices.
  • Quicksort. Divide the list into two parts, with all items in the first part coming before all items in the second; then sort the two parts recursively. Often the method of choice.
  • Radix sort. Sorts keys associated to items, or integer by processing digits.
  • Selection sort. Pick the smallest of the remaining elements, add it to the end of the sorted list.
  • Shell sort. Improves insertion sort with use of gaps between values.
  • Smoothsort. See heapsort.
  • Stochastic sort. See bogosort.
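
A minimal counting sort sketch, matching the description above; the input values are invented for the example.

#include <algorithm>
#include <iostream>
#include <vector>

// Counting sort for small non-negative integer keys: count occurrences of each
// value, then emit the values in order. Runs in O(n + k), where k is the key range.
std::vector<int> countingSort(const std::vector<int>& a)
{
    if (a.empty()) return a;
    int maxValue = *std::max_element(a.begin(), a.end());
    std::vector<int> count(maxValue + 1, 0);
    for (std::size_t i = 0; i < a.size(); ++i) ++count[a[i]];

    std::vector<int> sorted;
    sorted.reserve(a.size());
    for (int value = 0; value <= maxValue; ++value)
        for (int c = 0; c < count[value]; ++c)
            sorted.push_back(value);
    return sorted;
}

int main()
{
    int raw[] = {4, 1, 3, 4, 0, 2, 1};
    std::vector<int> data(raw, raw + sizeof(raw) / sizeof(raw[0]));
    std::vector<int> sorted = countingSort(data);
    for (std::size_t i = 0; i < sorted.size(); ++i) std::cout << sorted[i] << " ";
    std::cout << std::endl;                                 // prints: 0 1 1 2 3 4 4
    return 0;
}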

Merging

  • Simple Merge. Merge n sorted streams into one output stream. All the stream heads are compared, and the head with the least key is removed and written to the output.
  • k-way merge sort (or p-way merge). A merge sort that sorts a data stream using repeated merges.

Logic programming

  • Davis–Putnam algorithm. Checks the validity of a first-order formula.

Mathematics

Algebra

  • Buchberger’s algorithm. Finds a Gröbner basis.
  • Extended Euclidean algorithm. Solves the equation ax+by= c.
  • Fourier transform multiplication. For very big numbers, computing the fast Fourier transforms for two numbers, and multiplying the two results entry by entry.
  • Gram-Schmidt process. Orthogonalizes a set of vectors.
  • Gauss-Jordan elimination. Solves systems of linear equations.
  • Karatsuba multiplication. Recursive algorithm efficient for big numbers. Derived from the Toom-Cook method.
  • Knuth-Bendix completion. For rewriting rule systems.
  • Multivariate division. For polynomials in several indeterminates.
  • Risch algorithm. Translates indefinite integral to algebraic problem.
  • Toom-Cook (Toom3). Splits each number to be multiplied into multiple parts.

Eigenvalue algorithm

Algorithms to find the Eigenvalue and/or Eigenvector of a matrix.

  • QR algorithm. A popular method based on the QR decomposition.
  • Inverse iteration. Iterative eigenvalue algorithm.
  • Rayleigh quotient iteration. Extends the principle of the inverse iteration by using the Rayleigh quotient to obtain increasingly accurate eigenvalue estimates.
  • Arnoldi iteration. Compute the eigenvalues of the orthogonal projection of A onto the Krylov subspace.
  • Lanczos iteration. Method to find a zero vector in the process of the quadratic sieve.
  • Jacobi method. Numerical procedure for the calculation of all eigenvalues and eigenvectors of a real symmetric matrix.
  • Bisection.
  • Divide-and-conquer. Apply to real symmetric matrices.

Eigenvector algorithms

  • Richardson eigenvector algorithm.
  • Max-Plus. Eigenvector algorithm for nonlinear H-infinity control.
  • Abrams and Lloyd eigenvector algorithm.

Arithmetic

  • Binary GCD algorithm. Efficient way of calculating greatest common divisor.
  • Booth’s multiplication. Multiply two signed numbers in two’s complement notation.
  • Euclidean algorithm. Computes the greatest common divisor (a minimal sketch follows this list).
  • Binary multiplication (peasant or Egyptian multiplication). Decomposes one multiplicand into a sum of powers of two and creates a table of doublings of the other multiplicand.
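
A minimal sketch of the Euclidean algorithm mentioned above.

#include <iostream>

// Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b) until b is 0.
unsigned long long gcd(unsigned long long a, unsigned long long b)
{
    while (b != 0) {
        unsigned long long r = a % b;
        a = b;
        b = r;
    }
    return a;
}

int main()
{
    std::cout << "gcd(48, 36) = " << gcd(48, 36) << std::endl;     // 12
    std::cout << "gcd(270, 192) = " << gcd(270, 192) << std::endl; // 6
    return 0;
}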

Discrete logarithm in group theory

  • Baby-step giant-step. A series of well-defined steps to compute the discrete logarithm.
  • Pollard’s rho algorithm for logarithms. Analogous to Pollard’s rho algorithm for integer factorization but solves the discrete logarithm problem.
  • Pohlig-Hellman algorithm. Solves the problem for a multiplicative group whose order is a smooth integer. Based on the Chinese remainder theorem and runs in polynomial time.
  • Index calculus algorithm. Best known algorithm for certain groups, as the multiplicative group modulo m.

Integer factorization

Breaking an integer into its prime factors. Also named prime factorization.

  • Fermat’s factorization method. A representation of an odd integer as the difference of two squares.
  • Trial division. The simplest of the integer factorization algorithms. Try to divide the integer n by every prime number.
  • Lenstra elliptic curve factorization or elliptic curve factorization method (ECM). Fast, sub-exponential running time, employs elliptic curves.
  • Pollard’s rho. A special-purpose method that is effective at splitting composite numbers with small factors.
  • Pollard’s p-1. A special-purpose algorithm that is only suitable for integers with specific types of factors.
  • Congruence of squares. Finding a congruence of squares modulo n is a means to factor the integer n.
  • Quadratic sieve. Uses the idea of Dixon’s method. It is a general-purpose algorithm that is simpler than the number field sieve and the fastest for integers under 100 decimal digits.
  • Dixon’s factorization method. General-purpose integer factorization algorithm.
  • Special number field sieve. Special-purpose algorithm ideal for Fermat numbers.
  • General number field sieve (GNFS). Derived from the special number field sieve. The most efficient classical algorithm known for factoring large integers (more than about 100 decimal digits).

Prime test

Determining whether a given number is prime.

  • AKS primality test (Agrawal-Kayal-Saxena). The first published algorithm to be simultaneously polynomial, deterministic, and unconditional. Generalization of Fermat’s theorem, extended to polynomials.
  • Fermat primality test. Relies on an equality or set of equalities that hold for prime values, then checks whether they hold for the number to test.
  • Miller-Rabin primality test. Similar to the Fermat primality test. Unconditional probabilistic algorithm.
  • Sieve of Eratosthenes. Ancient algorithm for finding all prime numbers up to a specified integer (a minimal sketch follows this list).
  • Sieve of Atkin. Optimized version of the sieve of Eratosthenes.
  • Solovay-Strassen primality test. Same principle as the Fermat test.
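
A minimal sketch of the sieve of Eratosthenes mentioned above.

#include <iostream>
#include <vector>

// Sieve of Eratosthenes: mark the multiples of each prime as composite;
// whatever remains unmarked is prime.
std::vector<int> primesUpTo(int n)
{
    std::vector<bool> composite(n + 1, false);
    std::vector<int> primes;
    for (int p = 2; p <= n; ++p) {
        if (composite[p]) continue;
        primes.push_back(p);
        for (long long m = static_cast<long long>(p) * p; m <= n; m += p)
            composite[static_cast<std::size_t>(m)] = true;
    }
    return primes;
}

int main()
{
    std::vector<int> primes = primesUpTo(50);
    for (std::size_t i = 0; i < primes.size(); ++i) std::cout << primes[i] << " ";
    std::cout << std::endl;   // 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47
    return 0;
}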

Numerical

  • Fibonacci. Calculates the Fibonacci sequence.
  • Biconjugate gradient method. Solves systems of linear equations.
  • Dancing Links. Finds all solutions to the exact cover problem.
  • De Boor algorithm. Computes splines.
  • De Casteljau’s algorithm. Computes Bezier curves.
  • False position method. Approximates roots of a function.
  • Gauss-Legendre. Computes the digits of pi.
  • Kahan summation. A more accurate method of summing floating-point numbers.
  • MISER. Monte Carlo simulation, numerical integration.
  • Newton’s method. Finds zeros of functions with calculus.
  • Rounding functions. The classic ways to round numbers.
  • Secant method. Approximates roots of a function.
  • Shifting nth-root. Digit by digit root extraction.
  • Square root. Approximates the square root of a number.
  • Borwein’s algorithm. Calculates the value of 1/π.
  • Metropolis-Hastings. Generate a sequence of samples from the probability distribution of one or more variables.

Matrix processing

  • Exponentiating by squaring. Quickly computes powers of numbers and matrices.
  • Rutishauser. Algorithm for tridiagonalizing banded matrices. Uses the standard chasing step.
  • Strassen algorithm. Faster matrix multiplication.
  • Symbolic Cholesky decomposition. Efficient way of storing sparse matrix.
  • Zha’s algorithm. For tridiagonalizing arrowhead matrices, improves Rutishauser.
  • Matrix chain multiplication. Given a sequence of matrices, we want to find the most efficient way to multiply these matrices together using dynamic programming (not to perform the multiplication).

Optic

  • Gerchberg-Saxton. Algorithm for determining the phase from image and diffraction plane pictures.

Optimization

  • Ant colony optimization. Probabilistic technique for solving problems which can be reduced to finding good paths through graphs.
  • BFGS (Broyden-Fletcher-Goldfarb-Shanno method). Solves an unconstrained nonlinear optimization problem.
  • Branch and bound. Method to find optimal solutions of discrete and combinatorial optimization problems.
  • Conjugate gradient method. Iterative algorithm for the numerical solution of systems of linear equations, whose matrix is symmetric and positive definite.
  • Evolution strategy. Technique based on the ideas of adaptation and evolution. Its operators are: mating selection, recombination, mutation, fitness function evaluation, and environmental selection.
  • Gauss-Newton. An algorithm for solving nonlinear least squares problems.
  • Gradient descent. Approaches a local minimum of a function by taking steps proportional to the negative of the gradient (or the approximate gradient) of the function at the current point.
  • Gradient ascent. Approaches a local maximum of a function, as gradient descent but one takes steps proportional to the gradient.
  • Levenberg-Marquardt. Numerical solution to the problem of minimizing a sum of squares of several, generally nonlinear functions that depend on a common set of parameters.
  • Line search. Iterative approaches to find a local minimum of an objective function in unconstrained optimization.
  • Local search. Metaheuristic for solving hard optimization problems as maximizing a criterion among a number of candidate solutions.
  • Nelder-Mead method (downhill simplex method). A nonlinear optimization algorithm.
  • Newton’s method in optimization. The same algorithm to find roots of equations in one or more dimensions can also be used to find local maxima and local minima of functions.
  • PSO, Particle swarm optimization. Swarm intelligence modeled by particles in multidimensional space that have a position and a velocity.
  • Random-restart hill climbing. Meta-algorithm built on top of the hill climbing optimization algorithm.
  • Simplex algorithm. An algorithm for solving the linear programming problem.
  • Simulated annealing. Generic probabilistic meta-algorithm for the global optimization problem, inspired by annealing in metallurgy.
  • Steepest descent. See gradient descent.
  • Stochastic tunneling. Approach to minimizing a function, based on Monte Carlo sampling.
  • Tabu search. Optimization method that guides a search algorithm using memory structures.
  • Trust search. Another iterative approach to finding a local minimum of an objective function in unconstrained optimization.

Parsing

  • CYK (Cocke-Younger-Kasami). An efficient O(n³) algorithm for parsing any CNF context-free grammar.
  • Earley’s algorithm. A chart parser; an O(n³) algorithm for parsing any context-free grammar.
  • Inside-outside. An O(n³) algorithm for re-estimating production probabilities in probabilistic context-free grammars.

LL Parsers

Parse an LL context-free grammar top-down, from left to right.
An example is ANTLR, which is LL(*).

LR Parsers

Bottom-up parsers for context-free grammars.

  • Dijkstra’s shunting yard algorithm is commonly used to implement operator precedence parsers which convert from infix notation to Reverse Polish notation (RPN).
  • LALR (Look-ahead LR). With a one-token look-ahead. Yacc/Bison use LALR(1).
  • SLR (Simple LR) parser. An LR(0) parser modified to prevent shift-reduce and reduce-reduce conflicts. Remains inferior to LR(1).
  • Canonical LR parser or LR(1) parser. Has a look-ahead of one token.
  • GLR (Generalized LR parser), by Masaru Tomita. An extension of LR that handles nondeterministic or ambiguous grammars. Efficient for parsing natural language.

Recursive Descent Parsers

Top-down parsers built from a set of mutually-recursive procedures that represent the production rules of the grammar.

  • Packrat parser. A linear-time parsing algorithm supporting context-free LL(k) grammars. Uses backtracking and memoization (remembering its choices) to parse in linear time.

Prediction (statistics)

  • Baum-Welch. Finds the unknown parameters of a Hidden Markov Model (HMM). It makes use of the forward-backward algorithm.
  • Viterbi. Calculates the Viterbi path, the sequence of hidden states that is most likely to have produced a sequence of events.

Quantum

Application of quantum computation to various categories of problems

  • Grover’s algorithm. Provides quadratic speedup for many search problems.
  • Shor’s algorithm. Provides exponential speedup for factorizing a number.
  • Deutsch-Jozsa. Determines whether a Boolean function is constant or balanced.

(Pseudo) Random number generators

  • Blum Blum Shub. Based on a formula on prime numbers.
  • Mersenne twister. By Matsumoto and Nishimura; fast, with a very long period.
  • Lagged Fibonacci generator. Improvement of the linear congruential generator; uses the Fibonacci sequence.
  • Linear congruential generator. One of the oldest and not the best; uses three constants (multiplier, increment, modulus) to generate a sequence (a minimal sketch follows this list).
  • Yarrow algorithm. By Bruce Schneier, John Kelsey, and Niels Ferguson. Cryptographically secure pseudorandom number generator; can also be used as a real random number generator, accepting random inputs from analog sources.
  • Fortuna. Allegedly an improvement on the Yarrow algorithm.
  • Linear feedback shift register. A shift register whose input bit is a linear function of its previous state. The first state is the seed.
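
A minimal linear congruential generator sketch; the multiplier and increment below are the widely published “Numerical Recipes” constants, with the modulus 2^32 supplied by unsigned wrap-around.

#include <cstdint>
#include <iostream>

// Linear congruential generator: x(n+1) = (a * x(n) + c) mod m.
class Lcg {
public:
    explicit Lcg(std::uint32_t seed) : state(seed) {}
    std::uint32_t next()
    {
        state = 1664525u * state + 1013904223u;   // modulus 2^32 via unsigned overflow
        return state;
    }
private:
    std::uint32_t state;
};

int main()
{
    Lcg rng(42);
    for (int i = 0; i < 5; ++i)
        std::cout << rng.next() % 100 << " ";     // five pseudo-random values in [0, 100)
    std::cout << std::endl;
    return 0;
}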

Sciences

Astronomy

  • Ephemerides. Positions of the Moon and other celestial objects.
  • Julian day. Number of days that have elapsed since Monday, January 1, 4713 BC in the proleptic Julian calendar. The algorithm is a formula. Variations are: heliocentric, chronological, modified, reduced, truncated, Dublin, and Lilian Julian day.
  • Julian date. The Julian day, not rounded, with a decimal fraction.

Medical

  • Computations useful in healthcare.
  • Aids to diagnosis.

Signal processing

  • CORDIC. Fast trigonometric function computation technique.
  • Rainflow-counting algorithm. Reduces a complex stress history to a count of elementary stress-reversals for use in fatigue analysis.
  • OSEM (ordered subset expectation maximization). Algorithm for processing medical images.
  • Goertzel algorithm. Can be used for DTMF digit decoding.
  • Discrete Fourier transform. Determines the frequencies contained in a (segment of a) signal.
    • Fast Fourier transform
    • Cooley-Tukey FFT
    • Rader’s FFT
    • Bluestein’s FFT
    • Bruun’s FFT
    • Prime-factor FFT
  • Richardson-Lucy deconvolution. Image de-blurring algorithm.
  • Elser Difference-Map. X-Ray diffraction microscopy.

Software engineering

  • Algorithms for Recovery and Isolation Exploiting Semantics (ARIES). Database recovery.
  • Unicode Collation. Provides a standard way to put names, words or strings of text in sequence.
  • CHS conversion. Converting between disk addressing systems.
  • Cyclic redundancy check. Calculation of a check word.
  • Parity control. Simple, fast error-detection technique: checks whether the count of set bits is even or odd.

Memory allocation

  • Boehm garbage collector. Conservative garbage collector.
  • Buddy memory allocation. Algorithm to allocate memory such that fragmentation is less.
  • Generational garbage collector. Fast garbage collectors that segregate memory by age.
  • Mark and sweep.
  • Reference counting. Simple memory manager that counts links to data and reclaims the space when the count is zero.

Distributed systems

  • Lamport ordering. A partial ordering of events based on the happened-before relation.
  • Snapshot. A snapshot is the process of recording the global state of a system.
  • Vector clocks. Generate a partial ordering of events that captures causality.
  • Marzullo. Distributed clock synchronization.
  • Intersection. Another clock agreement algorithm.

Operating systems algorithms

  • Banker. Algorithm used for deadlock avoidance.
  • Page replacement. Selecting the victim page under low memory conditions.
  • Bully. Selecting new leader among many computers.

Disk scheduling algorithms.

  • Elevator. Disk scheduling algorithm that works like an elevator.
  • Shortest seek first. Disk scheduling algorithm to reduce seek time.

Process synchronisation algorithms.

  • Peterson. Allows two processes to share a single-use resource without conflict, using shared memory for communication. Can be generalized.
  • Lamport’s Bakery algorithm. Improve the robustness of multiple thread-handling processes by means of mutual exclusion.
  • Dekker. Another concurrent programming algorithm, as the Peterson’s one.

Scheduling algorithms

  • Earliest deadline first scheduling. When an event occurs (end of task, new task released, etc.) the queue will be searched for the process closest to its deadline.
  • Fair-share scheduling. Sharing cpu time between groups and users in groups. Another algorithm is called recursively to manage sharing of processes.
  • Least slack time scheduling or Least Laxity First. Assigns priority based on the slack time (difference between the deadline, ready and run time) of a process.
  • List scheduling. From an ordered list of processes with priorities, assign first to highest priority the available resources. Possible strategies: critical path, longest path, highest level first, longest processing time.
  • Multi level feedback queue.
  • Rate-monotonic scheduling. Optimal, preemptive, static-priority scheduling algorithm. Priority given in rate monotonic principle (first deadline is first processed).
  • Round-Robin scheduling. Simplest algorithm, assigns time slices to each process without priority.
  • Shortest job next (or first). Executes next the waiting process with the smallest execution time, is non-preemptive.
  • Shortest remaining time. A preemptive version of shortest-job-next scheduling: the running process is preempted when a process with a shorter remaining time arrives.

Texts

Searching

  • Aho-Corasick. Search in a text by building a table from words.
  • Bitap (or shift-or, shift-and, Baeza-Yates-Gonnet). Fuzzy string searching algorithm developed by Udi Manber and Sun Wu.
  • Boyer-Moore string search. Searches text by skipping ahead whenever the current window contains characters that do not appear in the search pattern.
  • Burrows Wheeler transform. String transformation that may be used to search words in a text faster.
  • Knuth-Morris-Pratt. Builds a table of the pattern to know how far to skip after a mismatch (a minimal sketch follows this list).
  • Rabin-Karp string search. Uses hashing for multiple searches.
  • Longest common subsequence problem. Haskell’s algorithm. Of two sequences.
  • Longest increasing subsequence problem. Of a single sequence. It also reduces to finding the longest path in a directed acyclic graph.
  • Shortest common supersequence. Of two sequences.
  • Horspool. Simplification of the Boyer-Moore algorithm; O(mn) in the worst case.
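
A minimal Knuth-Morris-Pratt sketch, showing the prefix (skip) table mentioned above and a search that uses it.

#include <iostream>
#include <string>
#include <vector>

// For each prefix of the pattern, store the length of the longest proper
// prefix that is also a suffix; this tells the search how far to fall back.
std::vector<int> buildPrefixTable(const std::string& pat)
{
    std::vector<int> pi(pat.size(), 0);
    int k = 0;
    for (std::size_t i = 1; i < pat.size(); ++i) {
        while (k > 0 && pat[i] != pat[k]) k = pi[k - 1];
        if (pat[i] == pat[k]) ++k;
        pi[i] = k;
    }
    return pi;
}

int kmpSearch(const std::string& text, const std::string& pat)
{
    std::vector<int> pi = buildPrefixTable(pat);
    int k = 0;
    for (std::size_t i = 0; i < text.size(); ++i) {
        while (k > 0 && text[i] != pat[k]) k = pi[k - 1];
        if (text[i] == pat[k]) ++k;
        if (k == static_cast<int>(pat.size())) return static_cast<int>(i) - k + 1;  // first match
    }
    return -1;                                           // not found
}

int main()
{
    std::cout << kmpSearch("abracadabra", "cada") << std::endl;   // prints 4
    std::cout << kmpSearch("abracadabra", "zzz") << std::endl;    // prints -1
    return 0;
}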

Approximate matching

  • Levenshtein distance (or edit distance). Minimum number of operations (insertion, deletion, replacement) needed to transform one string into the other (a minimal sketch follows this list).
  • Soundex. Phonetic algorithm for indexing words by their sound (in English).
  • Metaphone. Indexing words by their sound (in English).
  • NYSIIS. (New York State Identification and Intelligence System). Phonetic algorithm that improves soundex.
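
A minimal Levenshtein distance sketch using the classic dynamic programming recurrence, kept to a single row of the table.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Levenshtein (edit) distance: dp[j] holds the distance between the current
// prefix of a and the j-character prefix of b.
std::size_t editDistance(const std::string& a, const std::string& b)
{
    std::vector<std::size_t> dp(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) dp[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        std::size_t diagonal = dp[0];            // dp[i-1][j-1]
        dp[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t previous = dp[j];        // dp[i-1][j]
            std::size_t substitution = diagonal + (a[i - 1] == b[j - 1] ? 0 : 1);
            dp[j] = std::min(std::min(dp[j] + 1, dp[j - 1] + 1), substitution);
            diagonal = previous;
        }
    }
    return dp[b.size()];
}

int main()
{
    std::cout << editDistance("kitten", "sitting") << std::endl;   // classic example: 3
    std::cout << editDistance("flaw", "lawn") << std::endl;        // 2
    return 0;
}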

Word processing

  • Latent Dirichlet Allocation (LDA). Analysis of documents to associate the content with a topic. Used by search engines.
  • Latent Semantic Indexing (LSI). Automation of methods to attach a text to a topic from the words that occur commonly in this context.
  • Stemming. A method of reducing words to their stem, or root form.

Utilities

  • Doomsday. Computes the day of the week.
  • Xor swap. Swaps the values of two variables without using a buffer, by XORing the values.
  • Hamming weight. Find the number of 1 bits in a binary word.
  • Luhn. A checksum formula for validating identification numbers such as credit-card numbers (a minimal sketch follows this list).
  • Create bit mask. Bit manipulation algorithms.
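
A minimal sketch of the Luhn check described above; the sample number is the commonly used Luhn test value.

#include <cctype>
#include <iostream>
#include <string>

// Luhn checksum: starting from the rightmost digit, double every second digit,
// subtract 9 from doubled values above 9, and sum everything; the number is
// valid when the total is divisible by 10.
bool luhnValid(const std::string& digits)
{
    int sum = 0;
    bool doubleIt = false;
    for (int i = static_cast<int>(digits.size()) - 1; i >= 0; --i) {
        if (!std::isdigit(static_cast<unsigned char>(digits[i]))) return false;
        int d = digits[i] - '0';
        if (doubleIt) {
            d *= 2;
            if (d > 9) d -= 9;
        }
        sum += d;
        doubleIt = !doubleIt;
    }
    return sum % 10 == 0;
}

int main()
{
    std::cout << std::boolalpha
              << luhnValid("79927398713") << std::endl    // true
              << luhnValid("79927398714") << std::endl;   // false
    return 0;
}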

Misc.

  • BrowseRank. Alternative to PageRank based on user behavior.
  • Hypertext Induced Topic Selection (HITS, patented in 1997). Algorithm for scoring Web pages, by Jon Kleinberg. One score depends on incoming links; the other is based on outgoing links.
  • Leaf shape. From 28 parameters, the Growth-Algorithm Model of Leaf Shape can reproduce the shapes of real leaves found in nature.
  • PageRank (1998). Scoring algorithm by Larry Page and Sergey Brin (Google), using backlinks and outgoing links. The score of a Web page also depends on various other criteria.
  • Schreier-Sims. For permutation groups. A method of computing a Base and Strong Generating Set (BSGS) of a permutation group. Used by algebra algorithms.
  • Robinson-Schensted. Combinatorial algorithm.



 