http://ia.topicmaker.com/manager/businesspc/20070613/1.html
http://en.wikipedia.org/wiki/Chain_of_trust
http://www.openvirtualization.org/open-source-arm-trustzone.html
2012/10/23
2012/10/22
Crypto Knowledge
2.1 Random Number Generator (RNG)
Description
The purpose is to generate good random numbers.
Example Scenario
DTCPIP: AKE Challenge nonce A(n), B(n) generation.
DTCPIP: Exchange Key generation.
OpenSSL Interface
RAND_seed
RAND_bytes
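A minimal sketch of how the two calls above could be used to produce a nonce; the seed string and nonce size below are placeholders, not values taken from DTCP-IP:

#include <stdio.h>
#include <openssl/rand.h>

int main(void)
{
    static const char extra[] = "device-specific entropy";   /* placeholder seed material */
    unsigned char nonce[16];                                  /* e.g. a 128-bit AKE challenge nonce */
    int i;

    /* Optionally mix extra entropy into the PRNG state. */
    RAND_seed(extra, sizeof(extra) - 1);

    /* RAND_bytes() returns 1 on success. */
    if (RAND_bytes(nonce, sizeof(nonce)) != 1)
        return 1;

    for (i = 0; i < (int)sizeof(nonce); i++)
        printf("%02x", nonce[i]);
    printf("\n");
    return 0;
}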
Reference
2.2 EC-DSA
Description
Elliptic Curve Digital Signature Algorithm: User A uses a private key to sign some data (data1), generating data1sig; User B uses A's public key to verify that data1sig is truly the signature of data1 signed by A.
Example Scenario
DTCPIP: verifying the DTCPIP certificate using the DTLA public key.
OpenSSL Interface
SHA1
ECDSA_SIG_new
EC_KEY_set_group (the parameter is a group of constant data)
EC_KEY_set_private_key (not used for verification)
EC_KEY_set_public_key (not used for signing)
ECDSA_do_verify
ECDSA_do_sign
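A hedged sketch of the sign/verify flow with the calls listed above. For brevity it generates a fresh key pair on a standard curve (P-256) instead of building the key from DTLA constants the way the real code would:

#include <string.h>
#include <openssl/ec.h>
#include <openssl/ecdsa.h>
#include <openssl/obj_mac.h>
#include <openssl/sha.h>

int main(void)
{
    const unsigned char data1[] = "some data to be signed";
    unsigned char digest[SHA_DIGEST_LENGTH];
    EC_KEY *key;
    ECDSA_SIG *data1sig;
    int ok;

    /* Illustration only: a random key pair on P-256. The real DTCP-IP code
     * would build the key from constant curve/key material via
     * EC_KEY_set_group / EC_KEY_set_private_key / EC_KEY_set_public_key. */
    key = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
    EC_KEY_generate_key(key);

    /* Signing side (User A): hash the data, then sign the digest. */
    SHA1(data1, sizeof(data1) - 1, digest);
    data1sig = ECDSA_do_sign(digest, SHA_DIGEST_LENGTH, key);

    /* Verifying side (User B): check the digest against the signature using
     * the public half of the key. ECDSA_do_verify returns 1 if it is valid. */
    ok = ECDSA_do_verify(digest, SHA_DIGEST_LENGTH, data1sig, key);

    ECDSA_SIG_free(data1sig);
    EC_KEY_free(key);
    return ok == 1 ? 0 : 1;
}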
Reference
2.3 EC-DH
Description
Elliptic curve Diffie–Hellman allows two parties, each having an elliptic curve public-private key pair, to establish a shared secret over an insecure channel.
Example Scenario
DTCPIP: generating the Auth Key after the certificate exchange.
CPSDK: MDTCPIPAKEAbstract__makeMyFirstPhaseValue, MDTCPIPUtil__getSharedSecret
OpenSSL Interface
EC_KEY_new
EC_KEY_generate_key (creates a random private/public key pair on the curve)
EC_KEY_get0_public_key
EC_KEY_get0_private_key
MDTCPIPUtil__ECPointToBuf
MDTCPIPUtil__BNToBuf
EC_POINT_new
EC_POINT_oct2point
EC_KEY_set_group
EC_KEY_set_private_key
ECDH_compute_key
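A minimal sketch of deriving a shared secret with ECDH_compute_key. Both key pairs are generated locally on an assumed curve (P-256) just to show that the two sides compute the same value; the real protocol would exchange public points (e.g. via EC_POINT_oct2point) rather than share EC_KEY objects:

#include <stdio.h>
#include <string.h>
#include <openssl/ec.h>
#include <openssl/ecdh.h>
#include <openssl/obj_mac.h>

int main(void)
{
    /* Illustration only: both parties use P-256 and freshly generated keys. */
    EC_KEY *a = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
    EC_KEY *b = EC_KEY_new_by_curve_name(NID_X9_62_prime256v1);
    unsigned char secret_a[32], secret_b[32];
    int len_a, len_b;

    EC_KEY_generate_key(a);
    EC_KEY_generate_key(b);

    /* Each side combines its own private key with the peer's public point.
     * No KDF is passed, so the raw x coordinate of the shared point is returned. */
    len_a = ECDH_compute_key(secret_a, sizeof(secret_a),
                             EC_KEY_get0_public_key(b), a, NULL);
    len_b = ECDH_compute_key(secret_b, sizeof(secret_b),
                             EC_KEY_get0_public_key(a), b, NULL);

    printf("%s\n", (len_a == len_b && len_a > 0 &&
                    memcmp(secret_a, secret_b, len_a) == 0)
                       ? "shared secrets match" : "mismatch");

    EC_KEY_free(a);
    EC_KEY_free(b);
    return 0;
}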
Reference
http://en.wikipedia.org/wiki/Elliptic_curve_Diffie%E2%80%93Hellman
2.5 AES CBC Mode
Description
The Advanced
Encryption Standard (AES) is a symmetric-key encryption standard adopted by the
U.S. government. CBC (Cipher Block Chaining) mode is mainly used for media
content.
Example Scenario
DTCPIP: Content encryption and decryption
AACS content
encryption and decryption
OpenSSL Interface
EVP_aes_128_cbc
EVP_CipherInit_ex
EVP_CIPHER_CTX_set_padding
EVP_EncryptUpdate
EVP_DecryptUpdate
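A hedged round-trip sketch with the EVP calls listed above; the key, IV, and 16-byte payload are placeholders rather than real content keys:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    /* Placeholder key/IV; real content keys come from the DRM key hierarchy. */
    unsigned char key[16]   = "0123456789abcdef";
    unsigned char iv[16]    = "fedcba9876543210";
    unsigned char plain[16] = "sixteen byte blk";
    unsigned char enc[32], dec[32];
    int enc_len = 0, dec_len = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    /* Encrypt: the final argument enc=1 selects encryption. Padding is
     * disabled because the payload is a whole number of 16-byte blocks. */
    EVP_CipherInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv, 1);
    EVP_CIPHER_CTX_set_padding(ctx, 0);
    EVP_EncryptUpdate(ctx, enc, &enc_len, plain, sizeof(plain));

    /* Decrypt: re-initialise the same context with enc=0. */
    EVP_CipherInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv, 0);
    EVP_CIPHER_CTX_set_padding(ctx, 0);
    EVP_DecryptUpdate(ctx, dec, &dec_len, enc, enc_len);

    printf("%s\n", memcmp(plain, dec, sizeof(plain)) == 0 ? "round trip OK" : "mismatch");

    EVP_CIPHER_CTX_free(ctx);
    return 0;
}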
Reference
http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
2.6 AES ECB Mode
Description
The Advanced
Encryption Standard (AES) is a symmetric-key encryption standard adopted by the
U.S. government. ECB (Electronic Codebook) mode of operation is mainly used for purposes such as the management of cryptographic keys.
The disadvantage of this mode is that identical plaintext blocks are encrypted into identical ciphertext blocks; thus, it does not hide data patterns well.
Example Scenario
DTCPIP: Content key generation
OpenSSL Interface
EVP_aes_128_ecb
EVP_CipherInit_ex
EVP_CIPHER_CTX_set_padding
EVP_EncryptUpdate
EVP_DecryptUpdate
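A similar sketch for ECB mode, which also demonstrates the weakness described above: two identical plaintext blocks encrypt to identical ciphertext blocks. The key is again a placeholder:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    unsigned char key[16] = "0123456789abcdef";   /* placeholder key */
    unsigned char plain[32];
    unsigned char enc[32];
    int enc_len = 0;
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();

    /* Two identical 16-byte plaintext blocks. */
    memcpy(plain,      "identical block!", 16);
    memcpy(plain + 16, "identical block!", 16);

    EVP_CipherInit_ex(ctx, EVP_aes_128_ecb(), NULL, key, NULL, 1);
    EVP_CIPHER_CTX_set_padding(ctx, 0);
    EVP_EncryptUpdate(ctx, enc, &enc_len, plain, sizeof(plain));

    /* ECB encrypts each block independently, so the two ciphertext
     * blocks come out identical, which is what leaks data patterns. */
    printf("%s\n", memcmp(enc, enc + 16, 16) == 0
                       ? "ciphertext blocks identical (pattern leaks)"
                       : "ciphertext blocks differ");

    EVP_CIPHER_CTX_free(ctx);
    return 0;
}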
Reference
http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
2.7 AES CTR Mode
Description
The Advanced
Encryption Standard (AES) is a symmetric-key encryption standard adopted by the
U.S. government. CTR (Counter) mode turns a block cipher into a stream cipher.
It generates the next keystream block by
encrypting successive values of a "counter". The counter can be any
function which produces a sequence which is guaranteed not to repeat for a long
time, although an actual counter is the simplest and most popular.
Example Scenario
PlayReady: For ASF
package encryption and decryption.
OpenSSL Interface
NA
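OpenSSL is marked N/A here, but the counter-mode idea itself is easy to sketch on top of OpenSSL's low-level AES block function: encrypt a counter block to get keystream, XOR it with the data, and increment the counter. This is only an illustration of the mode, not the PlayReady/ASF implementation; the key is a placeholder:

#include <stdio.h>
#include <string.h>
#include <openssl/aes.h>

/* Encrypt or decrypt (CTR is symmetric) len bytes with AES-128 in counter mode.
 * ctr is the 16-byte initial counter block; key_bytes is a 16-byte key. */
static void aes128_ctr(const unsigned char *key_bytes, unsigned char ctr[16],
                       const unsigned char *in, unsigned char *out, size_t len)
{
    AES_KEY key;
    unsigned char keystream[16];
    size_t i, j;

    AES_set_encrypt_key(key_bytes, 128, &key);

    for (i = 0; i < len; i += 16) {
        /* Keystream block = AES-encrypt(counter). */
        AES_encrypt(ctr, keystream, &key);

        /* XOR the keystream into the data. */
        for (j = 0; j < 16 && i + j < len; j++)
            out[i + j] = in[i + j] ^ keystream[j];

        /* Increment the counter (big-endian). */
        for (j = 16; j-- > 0; )
            if (++ctr[j] != 0)
                break;
    }
}

int main(void)
{
    unsigned char key[16] = "0123456789abcdef";   /* placeholder key */
    unsigned char ctr1[16] = {0}, ctr2[16] = {0};
    unsigned char msg[] = "counter mode turns AES into a stream cipher";
    unsigned char enc[sizeof(msg)], dec[sizeof(msg)];

    aes128_ctr(key, ctr1, msg, enc, sizeof(msg));
    aes128_ctr(key, ctr2, enc, dec, sizeof(msg));
    printf("%s\n", memcmp(msg, dec, sizeof(msg)) == 0 ? "round trip OK" : "mismatch");
    return 0;
}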
Reference
2.8 SHA-1
Description
Secure Hash Algorithm; it processes data to produce a fixed-size message digest, which is used, for example, when producing digital signatures.
Example Scenario
DTCPIP: used to generate the digest of a buffer both when verifying a signature and when generating a signature.
OpenSSL Interface
SHA1
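A one-call sketch of producing a SHA-1 digest of a buffer (the buffer contents are arbitrary):

#include <stdio.h>
#include <openssl/sha.h>

int main(void)
{
    const unsigned char buf[] = "buffer whose digest will be signed";
    unsigned char digest[SHA_DIGEST_LENGTH];   /* 20 bytes for SHA-1 */
    int i;

    SHA1(buf, sizeof(buf) - 1, digest);

    for (i = 0; i < SHA_DIGEST_LENGTH; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}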
Reference
2.9 MAC
Description
Message Authentication Code, used to protect the integrity of information.
There are many methods to generate a MAC; one of the most commonly used is based on the SHA-1 algorithm.
Example Scenario
DTCPIP: a SHA-1 based method is used to generate the MAC during RTT verification.
OpenSSL Interface
SHA1
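The exact MAC construction used for DTCP-IP RTT verification is defined by the specification; as a generic stand-in, here is a keyed MAC built from SHA-1 using OpenSSL's HMAC helper, with a placeholder key and message:

#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int main(void)
{
    /* Placeholder key and message; the real RTT MAC uses keys and payloads
     * defined by the DTCP-IP protocol, and its construction may differ. */
    const unsigned char mac_key[] = "shared-mac-key";
    const unsigned char msg[]     = "RTT measurement payload";
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    unsigned int i;

    HMAC(EVP_sha1(), mac_key, sizeof(mac_key) - 1,
         msg, sizeof(msg) - 1, mac, &mac_len);

    for (i = 0; i < mac_len; i++)
        printf("%02x", mac[i]);
    printf("\n");
    return 0;
}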
Reference
2012/10/17
Kernel Message and Hotplug
http://www.kernel.org/doc/pending/hotplug.txt
The hotplug mechanism asynchronously notifies userspace when hardware is inserted, removed, or undergoes a similar significant state change. Linux provides two interfaces to hotplug; the kernel can spawn a usermode helper process, or it can send a message to an existing daemon listening to a netlink socket.

-- Usermode helper

The usermode helper hotplug mechanism spawns a new process to handle each hotplug event. Each such helper process belongs to the root user (UID 0) and is a child of the init task (PID 1). The kernel spawns one process per hotplug event, supplying environment variables to each new process describing that particular hotplug event. By default the kernel spawns instances of "/sbin/hotplug", but this default can be changed by writing a new path into "/proc/sys/kernel/hotplug" (assuming /proc is mounted).

A simple bash script to record variables from hotplug events might look like:

#!/bin/bash
env >> /filename

It's possible to disable the usermode helper hotplug mechanism (by writing an empty string into /proc/sys/kernel/hotplug), but there's little reason to do this unless you want to disable an existing hotplug mechanism. (From a performance perspective, a usermode helper won't be spawned if /sbin/hotplug doesn't exist, and negative dentries will record the fact it doesn't exist after the first lookup attempt.)

-- Netlink

A daemon listening to the netlink socket receives a packet of data for each hotplug event, containing the same information a usermode helper would receive in environment variables. The netlink packet contains a set of null terminated text lines. The first line of the netlink packet combines the $ACTION and $DEVPATH values, separated by an @ (at sign). Each line after the first contains a KEYWORD=VALUE pair defining a hotplug event variable. Here's a C program to print hotplug netlink events to stdout:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/poll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/types.h>
#include <linux/netlink.h>

void die(char *s)
{
    write(2, s, strlen(s));
    exit(1);
}

int main(int argc, char *argv[])
{
    struct sockaddr_nl nls;
    struct pollfd pfd;
    char buf[512];

    // Open hotplug event netlink socket

    memset(&nls, 0, sizeof(struct sockaddr_nl));
    nls.nl_family = AF_NETLINK;
    nls.nl_pid = getpid();
    nls.nl_groups = -1;

    pfd.events = POLLIN;
    pfd.fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);
    if (pfd.fd == -1)
        die("Not root\n");

    // Listen to netlink socket

    if (bind(pfd.fd, (void *)&nls, sizeof(struct sockaddr_nl)))
        die("Bind failed\n");
    while (-1 != poll(&pfd, 1, -1)) {
        int i, len = recv(pfd.fd, buf, sizeof(buf), MSG_DONTWAIT);
        if (len == -1)
            die("recv\n");

        // Print the data to stdout.
        i = 0;
        while (i < len) {
            printf("%s\n", buf + i);
            i += strlen(buf + i) + 1;
        }
    }
    die("poll\n");

    // Dear gcc: shut up.
    return 0;
}
2012/10/07
SoC Physical Address
How is the physical address decided?
Refer to AHB bus specification
An example slave
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0479b/BABBDJBB.html
Slave Multiplexer ( & AHB Decoder)
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0479b/BABBDJBB.html
All the SoC peripheral addressing logic is in the AHB address decoder and multiplexer.
About Chip Select
http://users.cis.fiu.edu/~downeyt/cda4101/chipselect.html
2012/10/05
2012/10/04
Linux Memory
Memory Map
http://duartes.org/gustavo/blog/post/anatomy-of-a-program-in-memory
Once virtual addresses are enabled, they apply to all software running in the machine, including the kernel itself.
In Linux, kernel space is constantly present and maps the same physical memory in all processes. Kernel code and data are always addressable, ready to handle interrupts or system calls at any time. By contrast, the mapping for the user-mode portion of the address space changes whenever a process switch happens.
It is also possible to create an anonymous memory mapping that does not correspond to any files, being used instead for program data. In Linux, if you request a large block of memory via malloc(), the C library will create such an anonymous mapping instead of using heap memory.
You can examine the memory areas in a Linux process by reading the file /proc/pid_of_process/maps.
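A small userspace sketch of the two points above: a large malloc() typically becomes an anonymous mapping, and the process can see it by reading its own maps file:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Large enough that glibc serves it with an anonymous mmap()
     * instead of growing the heap (the threshold is tunable). */
    char *big = malloc(8 * 1024 * 1024);
    char line[512];
    FILE *maps;

    printf("large allocation at %p\n", (void *)big);

    /* Dump this process's memory areas; the anonymous mapping that
     * backs the allocation appears as a region with no file name. */
    maps = fopen("/proc/self/maps", "r");
    if (maps) {
        while (fgets(line, sizeof(line), maps))
            fputs(line, stdout);
        fclose(maps);
    }

    free(big);
    return 0;
}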
http://duartes.org/gustavo/blog/post/how-the-kernel-manages-your-memory
http://duartes.org/gustavo/blog/post/page-cache-the-affair-between-memory-and-files
http://stackoverflow.com/questions/116343/what-is-the-difference-between-vmalloc-and-kmalloc
http://www.scs.ch/~frey/linux/memorymap.html
Memory Barrier
http://stackoverflow.com/questions/1787450/how-do-i-understand-read-memory-barriers-and-volatile
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0211i/Babfdddg.html
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka14041.html
Code of barrier for ARM
http://lxr.linux.no/#linux+v3.4.1/arch/arm/include/asm/barrier.h#L50
2012/10/03
Linux IO mapped memory access
APIs
For a device driver, hardware register access usually involves the following kernel APIs:
- request_mem_region: tells the kernel that the specified range of physical memory is going to be used.
- ioremap: maps the physical memory into kernel virtual memory so it can be accessed by the kernel.
- ioreadX, iowriteX: X can be 8, 16, or 32; the parameter is the kernel virtual address.
- release_mem_region: tells the kernel that the range of physical memory is no longer in use.
ioread & iowrite
Some drivers for ARM devices use the "writel" and "iowrite32" functions to access IO-mapped memory.
The writeX/readX functions are deprecated; the ioreadX/iowriteX functions should be used instead.
There is an interesting story about memory barriers in IO-mapped memory access: it seems that in 2006 writel and iowrite32 had no barrier, but nowadays they do.
http://lwn.net/Articles/198988/
request_mem_region
About why some code does not call request_mem_region:
http://stackoverflow.com/questions/7682422/what-does-request-mem-region-actually-do-and-when-it-is-needed
Examples
Some source examples show how to use the IO read/write related functions.
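A minimal sketch of how these APIs typically fit together in a driver's init/exit path; the base address, register offset, and device name below are hypothetical, made up purely for illustration:

#include <linux/init.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/module.h>

/* Hypothetical peripheral: a real driver would take the base address and
 * size from its platform resources rather than hard-coding them. */
#define DEMO_PHYS_BASE  0x10000000
#define DEMO_REG_SIZE   0x100
#define DEMO_CTRL_REG   0x04

static void __iomem *demo_base;

static int __init demo_init(void)
{
	u32 val;

	/* Claim the physical register window so other drivers cannot. */
	if (!request_mem_region(DEMO_PHYS_BASE, DEMO_REG_SIZE, "demo-device"))
		return -EBUSY;

	/* Map the physical registers into kernel virtual address space. */
	demo_base = ioremap(DEMO_PHYS_BASE, DEMO_REG_SIZE);
	if (!demo_base) {
		release_mem_region(DEMO_PHYS_BASE, DEMO_REG_SIZE);
		return -ENOMEM;
	}

	/* Access registers only through the ioread/iowrite accessors. */
	val = ioread32(demo_base + DEMO_CTRL_REG);
	iowrite32(val | 0x1, demo_base + DEMO_CTRL_REG);

	return 0;
}

static void __exit demo_exit(void)
{
	iounmap(demo_base);
	release_mem_region(DEMO_PHYS_BASE, DEMO_REG_SIZE);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");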
Linux Atomic Context and Process Context
In the article below, the concepts of atomic context and process context are clearly explained.
http://lwn.net/Articles/274695/
Kernel code generally runs in one of two fundamental contexts. Process context reigns when the kernel is running directly on behalf of a (usually) user-space process; the code which implements system calls is one example. When the kernel is running in process context, it is allowed to go to sleep if necessary. But when the kernel is running in atomic context, things like sleeping are not allowed. Code which handles hardware and software interrupts is one obvious example of atomic context.
...
There is more to it than that, though: any kernel function moves into atomic context the moment it acquires a spinlock. Given the way spinlocks are implemented, going to sleep while holding one would be a fatal error; if some other kernel function tried to acquire the same lock, the system would almost certainly deadlock forever.
...
"Deadlocking forever" tends not to appear on users' wishlists for the kernel, so the kernel developers go out of their way to avoid that situation. To that end, code which is running in atomic context carefully follows a number of rules, including
(1) no access to user space, and, crucially,
(2) no sleeping.
Problems can result, though, when a particular kernel function does not know which context it might be invoked in. The classic example is kmalloc() and friends, which take an explicit argument (GFP_KERNEL or GFP_ATOMIC) specifying whether sleeping is possible or not.
Another article
http://www.itechtalk.com/thread216.html
The kernel accomplishes useful work using a combination of process contexts and interrupt contexts. Kernel code that services system calls issued by user applications runs on behalf of the corresponding application processes and is said to execute in process context. Interrupt handlers, on the other hand, run asynchronously in interrupt context. Processes contexts are not tied to any interrupt context and vice versa.
Kernel code running in process context is preemptible. An interrupt context, however, always runs to completion and is not preemptible. Because of this, there are restrictions on what can be done from interrupt context. Code executing from interrupt context cannot do the following:
1. Go to sleep or relinquish the processor.
2. Acquire a mutex.
3. Perform time-consuming tasks.
4. Access user space virtual memory.
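As a tiny illustration of the kmalloc() point above, here is a hedged kernel-side sketch (the function names and the "demo" device are hypothetical): the caller picks GFP_KERNEL where sleeping is allowed and GFP_ATOMIC where it is not:

#include <linux/gfp.h>
#include <linux/interrupt.h>
#include <linux/slab.h>

/* Hypothetical helper called from process context (e.g. a system call or
 * ioctl handler): sleeping is allowed, so GFP_KERNEL may wait for memory. */
static void *demo_alloc_in_process_context(size_t size)
{
	return kmalloc(size, GFP_KERNEL);
}

/* Hypothetical interrupt handler: atomic context, sleeping is forbidden,
 * so GFP_ATOMIC is used and the allocation may simply fail instead. */
static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
	void *buf = kmalloc(64, GFP_ATOMIC);

	if (buf) {
		/* ... fill buf and defer further processing to process context ... */
		kfree(buf);
	}
	return IRQ_HANDLED;
}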
2012/10/02
Texture Mapping
The article below explains how textures are mapped onto a 3D object.
http://ex.osaka-kyoiku.ac.jp/~fujii/JREC6/onlinebook_selman/Htmls/3DJava_Ch14.htm
Using texture images
14.1 Introduction
14.2 3D texture coordinates
14.3 Texture and multiple levels of detail
14.4 TextureAttributes
14.5 Using transparent geometry with transparent texture images
14.6 Animated (video) texture mapping
14.7 Summary
The process of applying a bitmap to geometry is called texture mapping and is often a highly effective way of achieving apparent scene complexity while still using a relatively modest number of vertices. By the end of this chapter, you should be able to generate texture coordinates and apply a texture image to your geometry (e.g., figure 14.1).
If you are familiar with the process of texture mapping and texture coordinates, you may want to skim the first few sections and jump straight to the specifics of the Java 3D implementation.
As colors can only be associated with vertices in the model, if texture mapping was not used, a vertex would have to be located at every significant surface color transition. For highly textured surfaces such as wood or stone, this would quickly dominate the positions of the vertices rather than the geometric shape of the object itself. By applying an image to the geometric model, the apparent complexity of the model is increased while preserving the function of vertices for specifying relative geometry within the model.
Modern 3D computer games have used texture mapping extensively for a number of years, and first-person-perspective games such as Quake by Id software immerse the user in a richly texture-mapped world.
Figure 14.1 By applying a bitmap to the geometric model (left), very realistic results can be achieved even with a fairly coarse geometric mesh
14.1 Introduction
Texture mapping is exactly what it says. As an application developer, you are defining a mapping from 3D coordinates into texture coordinates. Usually this equates to defining a coordinate mapping to go from a vertex's 3D coordinates to a 2D pixel location within an image. Defining coordinate mappings sounds pretty complicated, but in practice it can be as simple as saying the vertex located at position (1,1,1) should use the pixel located at (20,30) in the image named texture.jpg.
Looking at figure 14.2 it should be obvious that the renderer does some pretty clever stuff when it maps a texture onto a geometric model. The texture used was 64 x 64 pixels in size, but when it was rendered, the faces of each cube were about 200 x 200 pixels. So, the renderer had to resize the texture image on the fly to fit the face of each cube. Even tougher, you can see that what started out as a square texture image turned into a parallelogram as perspective and rotation were applied to the cube.
Figure 14.2 A texture-mapped cube (left); the texture image, actual size (middle); and how the texture image was mapped onto one of the faces of the cube (right)
Figure 14.3 Texture coordinates range from 0.0 to 1.0 with the origin at the bottom left of the texture image. The horizontal dimension is commonly called s and the vertical dimension is called t
You should also be able to see that as the texture has been enlarged it has become pixelated. This is because several eventual screen pixels are all mapped to the same pixel within the texture image. This is a common problem with texture mapping and is visible in texture-mapped games such as Quake, as well.
To discuss the details of mapping between 3D vertex coordinates and texture pixels, some terminology must be introduced. Figure 14.3 illustrates texture coordinates. Instead of mapping to pixel locations directly (which would be relative to the size of the texture image), we use texture coordinates. Texture coordinates range from 0.0 to 1.0 in each dimension, regardless of the size of the image. We know therefore that the coordinates s = 0.5, t = 0.25 are always located halfway across the image and three-quarters of the way down from the top of the image. Note that the origin of the texture coordinate system is at the bottom left of the image, in contrast to many windowing systems that define the origin at the top left.
A pixel within an image that is used for texture mapping is often referred to as a texel.
There are essentially two types of texture mapping, static and dynamic. Defining a static mapping is the most commonly used and easiest form of texture mapping and is the subject of section 14.1.1.
14.1.1 Static mapping using per-vertex texture coordinates
Static mapping defines a static relationship between vertex coordinates and texture coordinates. This is usually implemented by simply assigning a texture coordinate to each vertex in the model (table 14.1).

| Vertex 143 |
|---|
| coordinate: 3, –6, 7 |
| color: red = 184, green = 242, blue = 32 |
| normal vector: 0.5, 0.2, -0.3 |
| texture coordinate: 0.3, 0.6 |

Vertex 143 has been assigned a number of attributes: coordinate (position), color, normal vector, and a texture coordinate.
The TextureTest example that follows can be used to experiment with the relationship among images, texture coordinates, and 3D vertex coordinates (figure 14.4). TextureTest loads the following information from a simple ASCII text file:
- Name of texture image
- Size of geometry in the x direction
- Geometry y scaling factor
- Number of vertices
- Texture coordinates for Vertex 1
- Texture coordinates for Vertex 2
- Texture coordinates for Vertex N
Width 400, Height 400
| Vertex | x | y | x' | y' | tx | ty |
|---|---|---|---|---|---|---|
| 0 | 159 | 99 | 159 | 301 | 0.40 | 0.75 |
| 1 | 125 | 126 | 125 | 274 | 0.31 | 0.69 |
| 2 | 110 | 163 | 110 | 237 | 0.28 | 0.59 |
| 3 | 102 | 243 | 102 | 157 | 0.26 | 0.39 |
| 4 | 118 | 304 | 118 | 96 | 0.30 | 0.24 |
| 5 | 179 | 363 | 179 | 37 | 0.45 | 0.09 |
| 6 | 220 | 364 | 220 | 36 | 0.55 | 0.09 |
| 7 | 264 | 335 | 264 | 65 | 0.66 | 0.16 |
| 8 | 287 | 289 | 287 | 111 | 0.72 | 0.28 |
| 9 | 295 | 204 | 295 | 196 | 0.74 | 0.49 |
| 10 | 279 | 132 | 279 | 268 | 0.70 | 0.67 |
| 11 | 253 | 104 | 253 | 296 | 0.63 | 0.74 |
| 12 | 207 | 95 | 207 | 305 | 0.52 | 0.76 |
Figure 14.4 The TextureTest example loads an image and a list of texture coordinates and displays a portion of the image in a 3D scene by texture mapping it onto a TriangleArray.
- The x, y columns are the pixel locations in the image that are returned by a bitmap editor. The origin for these 2D coordinates is at the top-left of the image. The x' and y' coordinates compensate for this by flipping the y coordinate (y' = height – y). The texture coordinates tx and ty are suitable for Java 3D (tx = x'/width and ty = y'/height). It is very easy to perform the coordinate conversions using a spreadsheet.
- The ASCII file is therefore:
daniel.gif (name of the image file)
5 (size in the x direction)
1.0 (y scale factor)
13 (number of texture coordinates)
0.40 0.75 (texture coordinate 1, x y)
0.31 0.69
0.28 0.59
0.26 0.39
0.30 0.24
0.45 0.09
0.55 0.09
0.66 0.16
0.72 0.28
0.74 0.49
0.70 0.67
0.63 0.74
0.52 0.76 (texture coordinate 13, x y)
The TextureTest example contains the formulae necessary for the coordinate transformation (figure 14.5).
Figure 14.5 The TextureTest example in action. Four texture-mapped TriangleArrays have been created from two sets of texture coordinate data and images. The TriangleArrays are rotated using an Interpolator.
IMPORTANT: The texture coordinates are specified in counterclockwise order. This is a requirement imposed by the com.sun.j3d.utils.geometry.Triangulator utility, which converts the polygon created from the texture coordinates into a TriangleArray.
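As a side note, the coordinate conversion quoted above (y' = height - y, tx = x'/width, ty = y'/height) is plain arithmetic; here is a small sketch of it, using the first vertex from the table as sample input:

#include <stdio.h>

int main(void)
{
    /* Image size and one pixel location as read from a bitmap editor
     * (top-left origin), e.g. vertex 0 from the table above. */
    double width = 400.0, height = 400.0;
    double x = 159.0, y = 99.0;

    /* Flip the y axis so the origin is at the bottom left, then normalise
     * both coordinates into the 0.0 .. 1.0 texture coordinate range. */
    double y_flipped = height - y;      /* y' = height - y  -> 301 */
    double tx = x / width;              /* tx = x' / width  -> ~0.40 */
    double ty = y_flipped / height;     /* ty = y' / height -> ~0.75 */

    printf("tx = %.2f, ty = %.2f\n", tx, ty);
    return 0;
}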