Transcript 736.ppt
Outline
Motivation and background
Read
Read “seek” time
Good sequential read
Write
Write log block (buffer)
“1” != “0”?
Conclusion and future work
Motivation
Era for Flash?
Nice properties:
Fast random access
Shock / temperature resistance
Smaller size and weight
Energy saving
Falling price, growing capacity
But we only get a BLACK BOX:
Vendors may not disclose internals
Little systematic performance-study literature
Flash Memory
Organization
Each page: 2KB or 4KB
Each block: 64 or 128 pages
[Figure: chip organization, pages 0 .. m-1 grouped into blocks 0 .. n-1]
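The geometry above can be sketched as a simple address mapping (a toy model; the 2KB page and 64-page block sizes are the ones on the slide, and `locate` is a hypothetical helper, not part of the study):

```python
PAGE_SIZE = 2048      # bytes per page (slide: 2KB or 4KB)
PAGES_PER_BLOCK = 64  # slide: 64 or 128 pages per block

def locate(byte_offset):
    """Map a logical byte offset to (block, page, offset-in-page)."""
    page = byte_offset // PAGE_SIZE
    return (page // PAGES_PER_BLOCK,
            page % PAGES_PER_BLOCK,
            byte_offset % PAGE_SIZE)
```

For example, byte 0 lands in block 0, page 0, and the first byte past 64 pages starts block 1.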
Read and Write
Read a page: ~50us
Write a page: ~800us
Any in-place write needs a block
erase: ~1500us
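Plugging the slide's latencies into a back-of-the-envelope model shows why in-place updates are so costly (a sketch; the copy-out-63-live-pages policy is an assumed naive controller, not necessarily what any real device does):

```python
READ_US, WRITE_US, ERASE_US = 50, 800, 1500  # per page / per block, from the slide
PAGES_PER_BLOCK = 64

def overwrite_one_page_us():
    """Naive in-place update of one page: copy out the other 63
    live pages, erase the block, then program all 64 pages back."""
    return 63 * READ_US + ERASE_US + 64 * WRITE_US

def fresh_page_write_us():
    """Writing into an already-erased page costs one page program."""
    return WRITE_US
```

Under these numbers the naive in-place update (~55.9ms) is roughly 70x the cost of a fresh-page write (0.8ms), which motivates the log-block design later in the talk.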
Outline
Motivation and background
Read
Read “seek” time.
Good sequential read.
Write
Write buffer size.
“1” != “0”?
Conclusion and future work
Read
Assumptions and basic knowledge
Uniform random read time?
Good sequential read performance?
Experiments Setup
Fedora Core 7.0, 1GHz CPU, 1GB memory
Flash memory
(I) Kingston DataTraveler 1G
(II) PNY Attaché 2G
(III) Unknown 1G
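A minimal version of such a read experiment might look like the sketch below (not the authors' harness; on a cached file it mostly measures the page cache, so against a real flash stick you would open the raw device, ideally with O_DIRECT, to see the “seek” behaviour):

```python
import os, time, random, tempfile

PAGE = 2048      # read unit, matching the flash page size
N_PAGES = 4096   # 8MB test file so the sketch runs anywhere

def bench():
    """Time sequential vs. random page reads over one file."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, os.urandom(PAGE * N_PAGES))
        os.fsync(fd)
        order = list(range(N_PAGES))
        times = {}
        for name, idxs in (("sequential", order),
                           ("random", random.sample(order, N_PAGES))):
            t0 = time.perf_counter()
            for i in idxs:
                os.pread(fd, PAGE, i * PAGE)  # positional read, no seek state
            times[name] = time.perf_counter() - t0
        return times
    finally:
        os.close(fd)
        os.unlink(path)
```

Comparing the two totals (and varying the stride) is how a per-access “seek” cost can be separated from transfer time.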
Random Read: “seek” time
Sequential Read: good!
Sequential Read: scale up
Read: what can we do?
Technology-aware file system
Block groups vs. cylinder groups: group related files
Random read is good but not perfect
Reduce the number of random accesses
Outline
Motivation and background
Read
Read “seek” time
Good sequential read
Write
Write log block (buffer)
“1” != “0”?
Conclusion
Write
Assumptions and basic knowledge
Bad random write performance:
each page update can trigger a block erase (1 page -> 1 erase)
Good sequential write performance:
erases are amortized (64 pages -> 1 erase)
Reason: log (buffer) blocks and merge
Write -- merge
1. Write: incoming pages go into a log block taken from the log block pool
2. Merge: the valid pages of the data block and the log block are copied into a free data block
3. Erase: the old data block and the log block are erased and reclaimed
[Figure: merge flow between Data Block, Free Data Block, Log Block, and the Log Block Pool]
Random Write: bad!
Sequential write: great relief
from erases
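The erase-count gap between the two workloads can be sketched with a toy counter (assuming the naive behaviour the slides describe: a sequential stream fills a block before one merge/erase, while in the worst case every random page update lands in a different block and forces its own erase):

```python
PAGES_PER_BLOCK = 64

def erases_for(writes, sequential):
    """Count block erases for a stream of page writes under the
    naive model: sequential writes amortize one erase per full
    block; random writes erase a block per page in the worst case."""
    if sequential:
        return (writes + PAGES_PER_BLOCK - 1) // PAGES_PER_BLOCK
    return writes
```

For 128 page writes that is 2 erases sequentially vs. up to 128 randomly, a 64x difference that matches the measured relief.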
Write -- log block
What is it?
A flash block used as a write buffer
Each log block corresponds to one data block at a time
Why is it used?
Hard disk: cluster writes; avoid redundant writes
Flash disk: reduce the number of erases
Interesting questions: log block size and usage
Log block -- size
Motivation
Trade-off: large merge time vs. frequent merges
Size: 64 or 128 pages (one flash block)
Determine the size of the log block pool:
Repeatedly write one page into a set of contiguous
flash blocks, cycling sequentially, and check the time cost.
Cycling over 3 blocks      Cycling over 4 blocks
Block #  Time (us)         Block #  Time (us)
1        120161            1        120500
2        120231            2        120109
3        118823            3        120575
1          2735            4        119316
2          2691            1        119152
3          2708            2        119743
1          4555            3        119708
2          1805            4        118452
3          2942            1        118827
                           2        120796
                           3        121815
                           4        120943
Log block pool -- Use Strategy
The log block pool uses FIFO to reclaim used blocks
Repeatedly writing fewer than 16 pages into one
flash block does not trigger a data merge.
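A toy FIFO pool reproduces the pattern in the timing table (a simulation under the inferred model, with a hypothetical `LogBlockPool` class; 3 log blocks matches the measured device): cycling over 3 blocks is fast after warm-up, while cycling over 4 forces a merge on every write.

```python
from collections import deque

class LogBlockPool:
    """FIFO pool of log blocks: a write to block b is cheap while b
    still owns a log block; allocating when the pool is full evicts
    the oldest log block, triggering a merge (the ~120ms path)."""
    def __init__(self, n_log_blocks=3):
        self.n = n_log_blocks
        self.fifo = deque()   # data blocks owning a log block, oldest first
        self.merges = 0

    def write(self, block):
        if block in self.fifo:
            return "fast"                 # hits its existing log block
        if len(self.fifo) == self.n:
            self.fifo.popleft()           # FIFO reclaim -> merge + erase
            self.merges += 1
        self.fifo.append(block)
        return "slow"

pool = LogBlockPool(3)
three = [pool.write(b) for b in [1, 2, 3] * 3]   # slow x3, then all fast
pool4 = LogBlockPool(3)
four = [pool4.write(b) for b in [1, 2, 3, 4] * 3]  # every write slow
```

With 3 log blocks, the 4-block cycle always misses (FIFO evicts the block that will be written next), which is exactly the uniformly slow right-hand column of the table.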
“0” != “1”
How about 50%?
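One way to make the “1” vs. “0” question concrete: after an erase every NAND cell reads “1”, and programming can only flip cells to “0”, so the number of zero bits in a page is a crude proxy for programming effort (a toy model, not a measurement; `zero_bits` is a hypothetical helper, and the slide only poses the 50% question):

```python
def zero_bits(data: bytes) -> int:
    """Cells start at '1' after erase; each '0' bit in the target
    pattern is one cell that must be programmed (1 -> 0)."""
    return sum(8 - bin(b).count("1") for b in data)

page_ff = b"\xff" * 2048  # all ones: nothing to program
page_00 = b"\x00" * 2048  # all zeros: every cell programmed
page_55 = b"\x55" * 2048  # alternating bits: the "50%" case
```

Timing page writes of these three patterns on the device is the natural follow-up experiment.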
Write -- what we can do?
New file system for flash
Special policy for frequently updated data, e.g.
inodes
Anticipatory scheduling
Modified LFS: log block != write buffer
More flexible: directly execute any write request in
the data block associated with a log block
Flipping “1” to “0” and “0” to “1” may save time
(attributed to Remzi)
Conclusion
Comprehensive study of the read and write
performance of flash memory
Designed a relatively systematic method to study
flash memory as a black box
Found some interesting and potentially useful
properties, e.g. “1” != “0” and “seek” time
Future work: apply a similar performance study to
SSDs and check whether the properties still hold
Q&A