One Year After the Tohoku Triple Disaster - Inovasi Online

Transcription

One Year After the Tohoku Triple Disaster
Vol. 20 No. 1 (April 2012)
Scientific Magazine of the Indonesian Students Association in Japan (Majalah Ilmiah Persatuan Pelajar Indonesia Jepang)
io.ppijepang.org

Main Topic: Profile of Energy Policy After Fukushima
Environmental Article: Municipal Solid Waste Treatment Using Hydrothermal Process to Produce a Renewable Energy Source
Research Articles: Winners of the Second Tokyo Tech Indonesian Commitment Award (TICA 2), Cluster 2 - Electrical, Electronic and Information Technology

PPI JEPANG
ISSN 2085-871X
Editorial Board

Editor-in-Chief
Arief Yudhanto (Tokyo Metropolitan University)

Editorial Departments/Staff
Main Articles: Muhammad Ery Wijaya (Kyoto University); Research Articles: Nirmala Hailinawati (Tokyo Institute of Technology); News and Opinion: Pandji Prawisudha (Tokyo Institute of Technology); Research Briefs: Cahyo Budiman (Okinawa Institute of Science and Technology/Institut Pertanian Bogor); Living: Retno Ninggalih (Sendai, Tohoku); Book and Film Reviews: M. Ridlo E. Nasution (Tokyo Metropolitan University); Graphic Design and Photography: Banung Grahita (Tokyo Metropolitan University/Institut Teknologi Bandung); Interviews: Gerald Ensang Timuda (Lembaga Ilmu Pengetahuan Indonesia); Site Administrators: Bayu Indrawan (Tokyo Institute of Technology), Pandji Prawisudha (Tokyo Institute of Technology)

Website: io.ppijepang.org
E-mail: [email protected]
Contributors
Cover photo: Anzilni Fathia Amasya

Contributors wanted!
Inovasi is transforming itself into a popular magazine to accommodate a wider readership and contributor base. We look forward to your articles and photos!
1) Photo essays from Japan and around the world (maximum two photos, with a very short essay of 100 words)
2) Popular articles (included in the main report; 600-2,000 words; simple language that a high-school student can understand; at most 10 footnotes and references)
3) Semi-popular research articles (serious content, on par with a paper; no more than 20 footnotes and references; 2,000-4,000 words)
4) News on Japan-Indonesia cooperation, on official visits of Indonesian delegations to Japan, and on achievements of Indonesians in Japan (200-300 words, plus photos if available)
5) Opinions on current issues in Japan and Indonesia, on Indonesia, on Japan, and so on (200-300 words, plus photos if available). For example: an opinion on the scientific-publication policy issued by DIKTI.
6) Research briefs. Please send the abstract of an article you have published in a scientific journal or proceedings, along with one representative figure.
7) Book and film reviews. After reading a new book or watching a film, share it in this column: a photo of the book cover and a short review of 100-300 words.
8) Interviews. You may submit your informal interviews with Indonesian figures about Japan, science, technology, the arts, and more.
9) Living. If you have tips on frugal living, cheap shopping, or succeeding at study and work in Japan, this is the place! Share them in 300 words.

Send your articles or photos to: [email protected]
Contents

Contents ... 1
Editorial ... 2
Main Topic ... 3
Profile of Energy Policy After the Fukushima Tragedy (Sidik Permana) ... 4
Revisiting Nuclear Power Plants and Energy Policy in Japan (Muhammad Kunta Biddinika, Muhammad Aziz) ... 12
Environmental Article
Municipal Solid Waste Treatment Using Hydrothermal Process to Produce a Renewable Energy (Pandji Prawisudha, Srikandi Novianti) ... 15
Research Articles: Winning Articles of TICA 2011 (Cluster II)
Development of Indonesian Automated Document Reader: Evaluation of Text Segmentation Algorithms (Teresa Vania Tjahja, Anto Satriyo Nugroho) ... 23
Optimized Turbo Code VLSI Architecture for LTE using System Level Modeling and Assertion Based Verification (Ardimas Andi Purwita, Trio Adiono) ... 28
Evaluation of Fingerprint Orientation Field Correction Methods (A. A. K. Surya, A. S. Nugroho) ... 35
Algorithm and Architecture Design of Soft Decision Demapper for SISO DVB-T by Using Quadrant K-Best (Probo Aditya N.I., Trio Adiono) ... 40
Application of Adaptive Neuro Fuzzy Inference System (ANFIS) for Lung Cancer Detection Software (Sungging Haryo W., Sylvia Ayu P., M. Yusuf Santoso, Syamsul Arifin) ... 45
Vol 19 No.4
1
EDITORIAL

[Photo: A tsunami sweeps through housing in Natori, Miyagi Prefecture. (Photo: Associated Press)]

The editorial board extends its deepest condolences to the families of the victims of the earthquake and tsunami that struck Tohoku on 11 March 2011. We hope that all affected residents and their families continue to be granted strength and patience.

***
This issue of INOVASI takes as its main topic "One Year After the Tohoku Triple Disaster". The triple disaster (earthquake, tsunami, and nuclear-reactor explosions) began with an earthquake centered 130 kilometres from Sendai, Miyagi Prefecture. The magnitude-9 quake (shaking intensity 7 on the shindo scale) struck on 11 March 2011 at 14:46, the largest earthquake in Japan's recorded history. Even the Earth itself was disturbed: its axis shifted by 10 to 25 cm, slightly altering the length of the day and the Earth's orbit.

Japan's tsunami warning system responded swiftly. Around a thousand sensors received the signal that a major tsunami was coming. The Japan Meteorological Agency immediately relayed the warning, and everyone was ordered to move away from the coast at once.

Sadly, not everyone survived the tsunami that afternoon. It claimed 15,854 lives; a further 3,203 people were declared missing and 26,992 were injured. About one million homes were destroyed, and more than 500,000 people lost their homes. Four thousand roads, 78 bridges and 29 railway lines were damaged. The scattered waste and debris was estimated at 25 tons.

Beyond that, the tsunami crippled nuclear power plant facilities, in particular the Fukushima Daiichi plant on the east coast. The nuclear reactors shut down automatically when the great quake struck, and the still-hot reactors were rapidly cooled by their cooling systems. Unfortunately, the cooling system at the TEPCO-owned plant was wrecked by the tsunami and failed to bring the reactor temperature down. The reactors melted and explosions followed. Nuclear radiation was detected in the plant's surroundings. Residents panicked, and the government acted quickly: those living within a 30-kilometre radius of the reactors were ordered to move as far away as possible.
The impact of this complex of three disasters, earthquake, tsunami and nuclear accident, was enormous, not only in lives and property but also for technological progress.

Japan took its more than 50 nuclear reactors offline one by one for safety inspections. The last reactor to be shut down was Unit 3 of the Tomari plant in Hokkaido; as of 5 May 2012, Japan could be said to be "free" of nuclear power.

The effects of the reactor tragedy were not confined to Japan. Soon after the disaster, a number of countries reviewed or cancelled their nuclear power programmes.

In all, the triple disaster cost Japan an estimated 16-25 trillion yen. Japan was shrouded in deep mourning.

One year on, the residents of Fukushima, Miyagi and Iwate continue to rebuild their regions. Two hundred kilometres away, in Chiyoda-ku, Tokyo, more than a thousand people filled the National Theatre. Prime Minister Yoshihiko Noda and Emperor Akihito led a joint prayer. At exactly 14:46 everything fell silent; Japan observed 60 seconds of silence.
Behind the commemoration, questions remain. What is the future of nuclear energy use and development in Japan? Will 30% of its energy still come from nuclear power?

For that reason, as a scientific magazine based in Japan, this issue of INOVASI presents several articles on Japan's nuclear development after the triple disaster. INOVASI carries a piece by Dr. Sidik Permana, a researcher at the Institute for Science and Technology Studies who was in Ibaraki Prefecture, next to Fukushima, when the disaster struck. He sets out Japan's nuclear policy after the Fukushima tragedy in an article entitled "Nuclear Profile and Policy After the Fukushima Tragedy". INOVASI also invited Muhammad Kunta Biddinika (Crisis Center, Embassy of the Republic of Indonesia in Tokyo) and energy-policy expert Dr. Muhammad Aziz to give their perspective. Their article, "Revisiting Nuclear Power Plants and Energy Policy in Japan", presents data on nuclear plants, energy policy and Japanese public opinion on nuclear power. We hope these two main articles shed light on Japan's nuclear policy for our readers.
Still on the subject of energy, INOVASI also features an article on renewable energy generated from municipal waste. Waste that is usually dismissed as a pollutant can in fact be processed efficiently and self-sustainingly with the hydrothermal method proposed by Dr. Pandji Prawisudha and Srikandi Novianti. Their article, "Municipal Solid Waste Treatment Using Hydrothermal Process to Produce a Renewable Energy Source", describes a technology for converting municipal waste into a coal-equivalent fuel and compares it with other waste-treatment technologies.

In addition, INOVASI continues to publish the winning and finalist scientific articles of the Tokyo Tech Indonesian Commitment Award (TICA) 2011, the fruits of research by students in Indonesia. The previous issue (Vol. 19 No. 3, December 2011) carried Cluster 1: Business, Social Science and Urban Planning. In this issue (Vol. 20 No. 1, April 2012) we publish five articles (three winners and two finalists) from Cluster 2: Electronic-Electrical and Information Technology. You can look forward to the articles from Cluster 3 (Applied Science and Technology) in a coming issue.

No ivory is without its cracks; no human work is perfect. The INOVASI editorial board welcomes comments, suggestions and information from its loyal readers. You can send your contributions to our e-mail address: [email protected].

Enjoy the issue!

Warm regards,
The Editor
Main Topic

Nuclear Profile and Policy After the Fukushima Tragedy

Sidik Permana
Bandung Institute of Technology, Nuclear Physics and Biophysics Research Division
Institute for Science and Technology Studies
Indonesian Nuclear Network
E-mail: [email protected]
The largest earthquake in Japan's history struck on Friday, 11 March 2011, at exactly 14:46, with magnitude 9, followed by aftershocks large and small; not long after the main shock, a tsunami struck. The sequence of natural disasters began with the earthquake as the trigger, followed by the rise of the sea that produced the tsunami, which ultimately destroyed nuclear power plant facilities, in particular the reactor-cooling systems at the Fukushima Daiichi plant, leading to the release of nuclear radiation.
With regard to nuclear energy: aware of its limited natural-resource potential, and prompted by the world oil crises of the 1970s, Japan embarked on a policy of energy diversification, security and self-sufficiency, adopting nuclear energy and gas to reduce its dependence on oil following the decline in coal consumption. The choice of nuclear power was accompanied by the strengthening of industrial infrastructure, research and development, and education.
Building on the turnkey nuclear plants built in Japan by the United States, Japan learned to develop its own nuclear technology, raising the safety level of its plants to match the demands of the local environment, with both earthquake-resistant construction and tsunami defences. Standard plant safety levels were raised, so that later reactor generations, especially Generation III, made significant advances in safety. Fukushima Daiichi is one of Japan's older reactor complexes: Unit 1 had been operating for 40 years, and several other units were approaching the same age. The Fukushima reactors belong to Generation II, which of course already met reactor-safety requirements both in normal operation and in emergencies.
The earthquake and tsunami taught a costly lesson: the loss of all of a reactor's emergency power sources (station blackout) is a fatal event, leading to a failure to remove the decay heat of the fuel and fission products in the reactor. The Fukushima nuclear crisis differs from the Three Mile Island (TMI) crisis in the United States in 1979, and from Chernobyl, in its process, its trigger and the radiation released, even though on the International Nuclear and Radiological Event Scale (INES) it carries the same rating.1
Status of Nuclear Power Plants in Japan Before and After the Fukushima Tragedy
The earthquake and tsunami of 11 March 2011 forced the nuclear plants to shut down automatically. The same happened at other power stations at the same time, including 20 hydroelectric units and 15 fossil-fuel units. The disaster also crippled the transmission and distribution networks and facilities supplying electricity to customers from plants in the eastern and northern parts of Honshu.2

[Figure 1. Status of nuclear power plants in Japan after the earthquake and tsunami disaster.5]
Before the earthquake and tsunami, Japan operated some 54 nuclear units, supplying about 31% of the national electricity supply.3 The disaster affected about 14 units: 3 at the Onagawa plant, 4 at Fukushima Dai-ni, 1 at Tokai Dai-ni and 6 at Fukushima Dai-ichi (see Figure 1). The remaining roughly 40 reactor units kept operating, as they were not directly affected.4 The loss of these nuclear units, together with several hydro and fossil-fuel units, cut the electricity supply by up to 20% of the total production of Tokyo Electric Power Company (TEPCO) and Tohoku Electric Power Company (Tohoku Electric), equivalent to 10-15 gigawatts for TEPCO and 2-3 gigawatts for Tohoku Electric.3

[Figure 2. Latest status of nuclear power plants in Japan after the earthquake and tsunami disaster, as of March 26, 2012.6]
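The unit tally above can be cross-checked with a few lines of arithmetic. The sketch below, in Python, uses only the figures quoted in this article and confirms that the per-site breakdown matches the totals:

```python
# Sanity-check of the reactor-unit figures quoted in the article.
# Per-site breakdown of units affected by the disaster:
# Onagawa 3, Fukushima Dai-ni 4, Tokai Dai-ni 1, Fukushima Dai-ichi 6.

total_units = 54                      # operating units before 11 March 2011
affected = {"Onagawa": 3, "Fukushima Dai-ni": 4,
            "Tokai Dai-ni": 1, "Fukushima Dai-ichi": 6}

hit = sum(affected.values())          # units directly hit by the disaster
unaffected = total_units - hit        # units that kept operating

assert hit == 14
assert unaffected == 40
print(f"{hit} units affected, {unaffected} units unaffected")
```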
As Figure 2 shows for the status of Japan's plants at the end of March 2012, only one unit was still operating, Unit 3 of the Tomari plant in Hokkaido, and it too would be shut down (shutdown) on entering its periodic inspection in May 2012. Seventeen units were offline: the 14 directly affected by the earthquake and tsunami, plus 3 units at the Hamaoka plant shut down at the government's direct request. The other 36 nuclear units were also offline, having entered periodic inspection or for other reasons (JAIF, 2012).6
The Japanese government's policy of running stress tests on all existing nuclear plants forced almost every unit offline, either simultaneously or as each unit approached its periodic inspection. Every plant operator is required to apply to the nuclear regulator to carry out the stress test, so that safety can be confirmed under the additional procedures introduced after the Fukushima tragedy.6 Of the 18 units that recently applied for stress-test review, two have been verified by the regulator and accepted by the central government for restart; after a series of reactor inspections and safety tests, the proposal now goes to the local governments to obtain public acceptance of restarting those plants.7
The most keenly felt impact of idling the nuclear plants is the reduced electricity supply to industry, on top of the reductions for the household and commercial sectors. In industry, many plants stopped operating for lack of power. The associations of Japanese steel producers and of the electric-power industry have asked the Japanese government to promptly restart the reactors that have passed their safety tests, to meet industry's electricity needs, and have further asked the government to expand Japan's nuclear capacity to meet future industrial demand.8
Households and commercial users, meanwhile, have been asked to cut their electricity use and improve efficiency. Companies and industries in Japan have been forced to close parts of their factories and buy some production components from abroad, which ultimately raises the price of their products. On the other hand, with the energy supply short, old fossil-fuel plants have been brought back into service and gas imports increased to meet Japan's domestic energy needs.
On the producers' side, with generating costs rising, utilities such as TEPCO have gradually raised electricity prices by about 17% to cover those costs, which of course means higher electricity bills for consumers. At the same time, the increased consumption of fossil fuels has pushed up CO2 emissions, hampering and setting back Japan's progress toward its Kyoto Protocol reduction targets. A further problem has arisen around so-called "genpatsu money", nuclear-plant money for residents and local governments at the town or village level: subsidies or compensation funds for the communities and local governments hosting nuclear facilities (among them nuclear power plants). That money had been given to communities to build local facilities and raise local welfare, naturally becoming part of local-government revenue.
Another impact appears when the nuclear industry stops operating: the derived and related industries that employ local workers stop too. This directly hits the livelihoods of local residents who for decades had enjoyed the benefits and prosperity brought by the nuclear plants (NHK coverage, March 2012, "One Year of Fukushima").
Japan's Nuclear Programme After Fukushima

As described in the previous section, Japan has now put additional safety checks in place to ensure that every nuclear unit can be guaranteed safe, especially against earthquakes and tsunami, among other things by applying stress tests. Since the Fukushima tragedy, the Japanese government has been working on a new policy to revise its long-term national energy policy. One key point is to craft a policy that no longer depends on nuclear energy as a central pillar of Japan's development and progress, a role it has played since the 1960s.
In the short and medium term, Japan is expected to keep using nuclear energy, with several options: maintaining the pre-Fukushima share while raising safety levels, especially against tsunami, or reducing the nuclear share gradually. In its long-term programme to reduce dependence on nuclear power, Japan is looking for the best formula for its energy supply by raising the share of renewables. This would revise the earlier policy under which Japan was to raise the nuclear contribution to its energy mix from about 30% (before the Fukushima tragedy) to 50% by 2050. This review of Japan's energy policy is still being drafted and will continue to change depending on conditions and on the findings of the expert panels.
The short- and medium-term programme is radiation decontamination around the plant, particularly in the evacuation zone, so that residents can return home, once the reactors are stable and radiation has fallen to safe levels. All of these programmes naturally run alongside the reconstruction of the regions hit by the earthquake and tsunami, whose extent and damage exceed those of the nuclear accident. Over the past year reactor Units 1-3 at Fukushima Daiichi have reached stability, and efforts are under way to move promptly to the next stage: reducing the radiation released, and removing and relocating reactor components for prompt decommissioning.
With only one of its 54 nuclear units in operation, Japan faces an energy deficit of about 20%, especially in the coming summer. That one unit belongs to Hokkaido Electric Power Company and will itself be shut down for periodic inspection on 5 May. In the latest news, the cabinet of Prime Minister Yoshihiko Noda announced that safety checks on two idle units belonging to Kansai Electric Power Company are complete, and the units are ready to supply power safely again for this summer.7 Units 3 and 4 of the Ohi plant in Fukui Prefecture, western Japan, are likely to be the first reactors restarted of all those taken offline across Japan by special request or for periodic inspection. The decision was warmly welcomed by business leaders in western Japan. Shigetaka Sato, chairman of the Osaka Chamber of Commerce and Industry, called it a major step toward restarting Japan's nuclear units; for Japan, he said, nuclear energy is the key to a secure energy supply. Meanwhile, Kyoto Governor Keiji Yamada and Shiga Governor Yukiko Kada remain reluctant about the restart decision.7
Prime Minister Noda reaffirmed that the latest decision does not mean the government's policy of reducing the nation's future dependence on nuclear energy has changed. Minister Edano said that if the government fails to win public understanding for restarting the Ohi reactors, households and companies in the Kansai service area will be asked to cut their electricity use by around 20% or more this coming summer.7
The World's Nuclear Programmes After the Fukushima Tragedy

As of 5 December 2011, eight months after the Fukushima tragedy, some 434 nuclear units in about 30 countries were still operating, with a total installed capacity of 367.7 gigawatts. About 64 units are currently under construction, most of them in Asia, and about six units were connected to the grid (on-grid) in 2011, among them Kaiga-4 in India, Lingao-4 and the CEFR in China, Chashma-2 in Pakistan, Bushehr-1 in Iran and Kalinin-4 in Russia.9 During 2010-2011, about 15 reactor units in Asia and three elsewhere in the world began construction, and 11 units were connected to the grid.
After the Fukushima tragedy, in September 2011, Iran authorized its first nuclear plant to operate. Other countries such as the UAE and Turkey have ordered their first units from Korea and Russia, and are continuing their nuclear programmes. Belarus, one of the countries whose territory was contaminated by radiation from the Chernobyl tragedy, signed an intergovernmental agreement in October 2011 for its first plant, and Bangladesh signed a similar cooperation agreement. In ASEAN, Vietnam signed a loan contract in December 2011 to finance construction of its first plant.9 In Europe, meanwhile, the United Kingdom has signed a contract with France to build one new nuclear unit, expected to employ 350 workers.10
In the same year, the world body for nuclear energy, the International Atomic Energy Agency (IAEA), recorded and classified the nuclear programmes of various countries, particularly after Fukushima. Three countries have ordered new nuclear units: Belarus, Turkey and the United Arab Emirates (UAE). Six others have decided to build and have begun preparing the infrastructure for their plants: Bangladesh, Jordan, Vietnam, Poland, Egypt and Nigeria. Seven more countries are actively preparing nuclear programmes for plant construction but have not yet made a final decision on when to build their first plant, among them Chile, Indonesia, Malaysia, Morocco, Thailand and Saudi Arabia. Fifteen countries, including several in Asia and Africa, are considering and intend to develop nuclear programmes, while eight others have no plans to build plants at home but have expressed interest in nuclear programmes.
About four countries cancelled their nuclear programmes after the Fukushima tragedy: Kuwait, Italy, Venezuela and Senegal. At the same time, the IAEA has been reviewing the integrated nuclear-energy infrastructure of several countries, to assess their readiness to pursue nuclear programmes, particularly countries committed to or interested in building nuclear units: in 2009 for Jordan, Indonesia and Vietnam; in 2010 for Thailand; in 2011 for the UAE and Bangladesh, with a follow-up review for Jordan; and in 2012 reviews for Poland and Belarus, among several additional missions.10
Responding to the Fukushima tragedy, several countries with nuclear programmes that operate and develop plants reacted quickly. France launched a safety assessment of its nuclear facilities, particularly its power plants, beginning 23 March 2011, as part of the French prime minister's order to audit the country's nuclear installations. Korea likewise responded by carrying out special inspections of its nuclear facilities, and incorporated 50 short- and long-term plans into its nuclear programme.11
In the IAEA's estimates and projections, the world's nuclear programmes are predicted to advance slowly, or even suffer delays, after Fukushima, but the event has not reversed the growth of nuclear energy programmes worldwide. An interesting fact is that several newcomer countries remain keen to develop nuclear energy, and their interest is still high.9
References
1. INES (the International Nuclear and Radiological Event Scale) Rating on the Events in Fukushima Dai-ichi Nuclear Power Station by the Tohoku District - off the Pacific Ocean Earthquake. Ministry of Economy, Trade and Industry. News release, April 12, 2011. URL: http://www.nisa.meti.go.jp/english/files/en20110412-4.pdf, accessed May 5, 2011.
2. Over 30% of listed firms report damage. NHK Report, March 25, 2011.
3. Impact to TEPCO's Facilities due to Miyagiken-Oki Earthquake. Tokyo Electric Power Company (TEPCO) Press Release, March 11, 2011. URL: http://www.tepco.co.jp/en/press/corp-com/release/11031105-e.html, accessed May 2, 2011.
4. Sidik Permana. Krisis Nuklir Fukushima Dai-ichi. INOVASI Online, Vol. 19 | XXII | July 2011.
5. Information on status of nuclear power plants in Fukushima. Japan Atomic Industrial Forum, April 13, 2011. URL: http://www.jaif.or.jp/english/news_images/pdf/ENGNEWS01_1302693266P.pdf, accessed May 4, 2011.
6. http://www.jaif.or.jp/english/news_images/pdf/ENGNEWS02_1333964448P.pdf, accessed April 14, 2012.
7. http://mainichi.jp/english/english/newsselect/news/20120414p2g00m0dm004000c.html, accessed April 14, 2012.
8. http://www.japan-press.co.jp/modules/news/index.php?id=2992, accessed April 16, 2012.
9. IAEA. The World Nuclear Energy Prospects after Fukushima Accident: IAEA Projections. IAEA and Nuclear "New Comers", IAEA Action Plan on Nuclear Safety. International Conference GLOBAL 2011, December 12-15, 2011, Chiba, Japan.
10. http://uk.reuters.com/article/2012/02/20/uk-kier-idUKTRE81J0C220120220, accessed April 15, 2012.
11. Soon Heung Chang. Perspective on Korean Nuclear Energy after the Fukushima Accident. International Conference GLOBAL 2011, December 12-15, 2011, Chiba, Japan.
Main Topic

Revisiting Nuclear Power Plants and Energy Policy in Japan

Muhammad Kunta Biddinika*1 and Muhammad Aziz2
1 Dompet Dhuafa Foundation, Japan Operation Office
2 Advanced Energy Systems for Sustainability, Tokyo Institute of Technology
*E-mail: [email protected]
Many are waiting to see what medium- and long-term steps Japan will take after the accident at the Fukushima Dai-ichi nuclear power plant. Amid mounting domestic public pressure, Japan is thought likely to follow Germany's lead and quickly end its nuclear power programme.
This article aims to present a realistic picture of Japan, drawn from real and valid data and steering clear of tendentious, politically coloured sources, and thereby to set straight what is actually happening in Japan, including what policies are realistic as the best choice for Japan now and in the future.

By way of background, until the recent tsunami the primary energy supply for electricity generation in Japan consisted of 29.3% nuclear, 29.3% LNG, 24.9% coal, 9.3% hydro, 7.1% oil and 1.1% new energy.1 Nuclear plants are currently the mainstay of Japan's baseload, given their scale and their ability to generate electricity relatively stably.

Japan is a country poor in primary energy reserves. Like it or not, it must import its main energy sources from other countries, including Indonesia. For perspective, Japan's total annual electricity consumption is about 7 times Indonesia's, even though its population is smaller.2
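The quoted supply mix can be tabulated for a quick consistency check. A minimal Python sketch, using only the shares quoted above (note that the rounded figures total 101.0%, evidently a rounding artifact in the source):

```python
# Primary energy supply shares for electricity generation in Japan
# before the 2011 tsunami, as quoted in the text (percent).
supply_share = {"nuclear": 29.3, "LNG": 29.3, "coal": 24.9,
                "hydro": 9.3, "oil": 7.1, "new energy": 1.1}

# Combined fossil share (LNG + coal + oil) and the listed total.
fossil = supply_share["LNG"] + supply_share["coal"] + supply_share["oil"]
total = sum(supply_share.values())

print(f"fossil share: {fossil:.1f}%, listed total: {total:.1f}%")
# The quoted shares sum to 101.0%, i.e. within rounding of 100%.
assert abs(total - 100) < 1.5
```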
Japanese Public Opinion After the Disaster

The earthquake and accompanying tsunami in northern Japan had a major impact on its people, both psychological and economic. In surveys conducted after the disaster by several Japanese media outlets, newspapers (namely Tokyo Shimbun, Yomiuri Shimbun and Asahi Shimbun) as well as television stations (namely TBS and NHK), more than 50 percent of Japanese respondents still accepted nuclear power as one option for electricity generation. Many respondents also hoped that the nuclear plants would keep operating once their buildings and reactors had been reinforced against the threat of earthquakes and tsunami.

The Japanese are well aware that their country has very little in the way of primary energy reserves. At the same time, they need electricity in large quantities to run their economy. The heavy industries at the heart of Japan's economy were themselves halted for a time by the power shortage, and this pushed Japan's economy to a worrying state after the earthquake and tsunami.
The survey results and the facts on the ground show how calm and clear-eyed the Japanese are in choosing their energy supply, even after the accident at the Fukushima Dai-ichi nuclear power plant. The percentage supporting nuclear power did decline after the earthquake, but not by enough to reverse the overall support.
The cost of electricity in Japan per kWh for each source is: nuclear 5.3 yen, LNG 6.2 yen, hydro 11.9 yen, coal 5.7 yen, and oil 10.7 yen.3 From this we can see that nuclear is the cheapest energy source of all. Likewise, the carbon dioxide (CO2) emissions of nuclear generation are very small, about 22 g-CO2/kWh. This figure is almost the same as the emissions of hydroelectric power (18 g-CO2/kWh) and smaller than those of solar photovoltaic power, which amount to 59 g-CO2/kWh.4
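The per-source figures above lend themselves to a quick side-by-side check. The short sketch below merely restates the numbers quoted in the article; it is an illustration, not additional data:

```python
# Generation cost (yen/kWh) and CO2 emissions (g-CO2/kWh) as quoted in the text
cost_yen_per_kwh = {"nuclear": 5.3, "LNG": 6.2, "hydro": 11.9, "coal": 5.7, "oil": 10.7}
co2_g_per_kwh = {"nuclear": 22, "hydro": 18, "solar": 59}

cheapest = min(cost_yen_per_kwh, key=cost_yen_per_kwh.get)
print(cheapest, cost_yen_per_kwh[cheapest])  # nuclear 5.3
```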
Future Energy Policy
For the next several years, the basic pattern of primary energy supply for electricity is expected to stay close to 30% nuclear, 30% coal, 20% gas and oil, and 20% renewables. Although it has not been officially decided, nuclear is projected to supply around 23-25%. At the very least, that will be the basic pattern in Japan in 2020-2030.
Nuclear will remain one of the options for baseload power generation in Japan over the next few decades. For the coming years, the main focus is on repairing and strengthening the resistance of buildings and reactors to earthquakes and tsunami. That said, improving the operational safety of the reactors themselves is also receiving attention.
For coal, Integrated Gasification Combined Cycle (IGCC) technology, capable of generating electricity at efficiencies of up to 50%, will be deployed. Other technologies for raising the efficiency of coal-fired power plants, such as Integrated Gasification Fuel Cell (IGFC), as well as carbon capture and storage (CCS) technology to reduce carbon emissions, are also under intensive development.
The most significant change is occurring in renewable energy, such as hydro, geothermal, biomass, solar, and wind, which is predicted to rise to as much as 20%. However, the technology needed to make these renewables support baseload generation in Japan is still far from adequate. This is due to their small scale and fluctuating output, which affects the frequency and voltage quality of the electricity they produce. Moreover, Japan's voltage quality standard is very strict, at 94-106 volts.
Among these renewable sources, solar power is one of the main targets for electricity and heat supply in Japan over the coming years, both at small scale, such as Solar Home Systems (SHS), and at large scale. The obstacles are its high cost and the large land area required, and such land is hard to come by in a country like Japan.
In addition, the fluctuating nature of solar power, caused by the weather as well as the difference between day and night, makes it dependent on electricity storage and management technologies such as batteries, fuel cells, and the like. Unfortunately, the number of such storage facilities and technologies is still far from sufficient. New technologies such as Vehicle-to-Grid (V2G), which use electric or hybrid cars as both electricity storage and supply, are also still at the research stage.
All the considerations above mean that energy in Japan for the next two decades will still be dominated by nuclear power. Like it or not, this is because Japan's economy depends heavily on a stable and secure energy supply.
References:
1. The Federation of Electric Power Companies of Japan, 2011, Electricity Review Japan 2011, http://www.fepc.or.jp/english/library/electricity_eview_japan/__icsFiles/afieldfile/2011/01/28/ERJ2011_full.pdf, accessed May 1, 2012
2. Wikipedia, 2012, List of countries by electricity consumption, http://en.wikipedia.org/wiki/List_of_countries_by_electricity_consumption#cite_note-CIA-0, accessed May 1, 2012
3. Matsuo, Y., Nagatomi, Y., Murakami, T., 2011, Thermal and Nuclear Power Generation Cost Estimates Using Corporate Financial Statements, http://eneken.ieej.or.jp/data/4103.pdf, accessed May 1, 2012
4. World Nuclear Association, 2012, Comparative Carbon Dioxide Emissions from Power Generation, http://www.world-nuclear.org/education/comparativeco2.html, accessed May 1, 2012
Artikel Lingkungan
Municipal Solid Waste Treatment
Using Hydrothermal Process
to Produce a Renewable Energy Source
Pandji Prawisudha, Srikandi Novianti
Dept. of Environmental Science and Technology,
Tokyo Institute of Technology, Japan
E-mail: [email protected]
Abstract
An alternative treatment converting Municipal Solid Waste (MSW) into alternative fuel by hydrothermal treatment was conducted in a commercial-scale plant of 1 ton/batch capacity, applying medium-pressure saturated steam at approximately 215 °C in a stirred reactor for 30 minutes. The process yielded a uniform pulp-like product whose density increased four-fold, corresponding to a 75% waste volume reduction. The product also showed an average heating value of 18 MJ/kg, similar to that of low-grade sub-bituminous coal. The energy balance calculation revealed that the energy required for the hydrothermal treatment was as low as one-ninth of the energy content of the product, which indicates that the hydrothermal treatment is a self-sustaining system requiring less energy than conventional waste-to-fuel treatment processes. It can be concluded that the hydrothermal treatment would be a viable way to treat MSW and convert it into an alternative renewable energy source.
Key words: renewable energy, waste treatment, hydrothermal process, alternative fuel
©2011. Persatuan Pelajar Indonesia Jepang. All rights reserved.
1. Introduction
Due to the increasing price of crude oil, many industries
are now using coal; this has resulted in a significant
increase in the demand and also in the price of coal, which
increased by 150% in 2008.1 As a result, an alternative solid
fuel is greatly needed to replace or partially substitute coal
as the main fuel. On the other hand, municipal solid waste
(MSW) has become a severe problem in many developing
and developed countries due to the limited lifetime of final
waste disposal facilities. In Japan, for example, the lack of final disposal facilities is still a major concern because the remaining lifetime of those in use is only 18 years.2
Current waste treatment technologies, however, are still not able to eliminate the waste while meeting three conditions: environmentally friendly, economically feasible, and high processing capacity. The aforementioned conditions should promote the usage of MSW as an alternative solid fuel. Treating the MSW can solve the problem of the large quantities of MSW, and
thus the treated MSW can be supplied as solid fuel for the
industries.
Despite the advantages of MSW as a fuel, its very high moisture content and irregular form make it difficult to use directly. To overcome these problems, an innovative hydrothermal treatment technology has been developed at the Tokyo Institute of Technology for converting high-moisture solid wastes into a dried, uniform, pulverized-coal-like solid fuel with low energy consumption.3 The process differs from others, such as super- and sub-critical water oxidation, not only in temperature but also in its water phase, which is saturated steam. This configuration leads to a lower capital cost for the treatment plant.
A commercial-scale plant implementing the process, shown in Figure 1, operates on a closed-loop principle. Its 3 m3 reactor is capable of handling up to one ton of waste per batch, applying saturated steam while mixing the waste and steam with rotor blades driven by a 30 kW electric motor.
At the beginning of the process, waste is fed into the reactor to undergo the hydrothermal reaction with saturated steam supplied from the oil-fired boiler, while being stirred by a rotor unit to obtain homogeneous products. After the target temperature is reached, the reactor is set to maintain it for a certain holding period. Finally, bleeding the pressurized steam to the condenser until the reactor reaches atmospheric pressure completes the hydrothermal treatment. The products can then be discharged by rotating the stirrer, which doubles as a screw conveyor. A total of three hours is required to complete the process.
In this paper, the performance of the hydrothermal treatment of MSW for producing a renewable energy source is presented. A typical energy balance calculation of the treatment is also described and compared with conventional waste treatments.
2. Experimental method
2.1. Hydrothermal treatment experiment
The hydrothermal treatment experiments on MSW were
conducted in Japan using the facility described above.
Sampling of the raw MSW was based on the Japanese standard JIS K0060. Three process parameters, i.e., (1) the pressure and its related steam temperature, (2) the holding period, and (3) the amount of MSW, are presented in Table 1. The optimum experimental condition was chosen from the results of previous hydrothermal experiments.4

Figure 1. Hydrothermal treatment plant diagram (fuel-fired boiler supplying steam to the stirred reactor, with condenser and water treatment in a closed loop)
During the experiment, the reactor temperature, the water
and fuel consumption, and the motor power consumption
data were taken at certain intervals to obtain energy and
mass balance and the operating characteristics of the
hydrothermal treatment. The final solid products and the
condensed water were then collected in sealed containers
for analysis.
Table 1. MSW Experimental Parameters

Parameter | Unit | Value
Pressure | MPa | 2.0
Temperature | °C | 215
Holding period | min | 30
MSW mass | kg | 705
2.2. MSW and product composition
analysis
Physical characteristics of the raw MSW and the
hydrothermally treated products were analyzed based on
the Japanese Ministry of Health and Welfare regulation
no.95/1977. Moisture contents were obtained by drying
the crushed samples in a constant-temperature oven at
105 °C, and the ash contents were obtained by inserting
the samples into a constant-temperature oven maintained
at 550 °C. The mass of the combustible content was
obtained from the difference between mass of the dried
samples and their ash weight.
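As a quick consistency check on this proximate analysis, the moisture, combustible, and ash fractions of each sample should sum to 100% (figures taken from Table 2):

```python
# Wet-basis fractions (%) from Table 2; combustible = dried mass - ash
raw_msw = {"moisture": 33, "combustible": 50, "ash": 17}
product = {"moisture": 44, "combustible": 46, "ash": 10}
for sample in (raw_msw, product):
    assert sum(sample.values()) == 100
```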
2.3. Heating value analysis
To confirm its feasibility as a fuel, the heating values of the
raw MSW and the hydrothermally treated products were
analyzed according to the Japanese standard JIS M8814.
The samples were dried at 105 °C, crushed, and each sample was analyzed using a Shimadzu CA-4PJ bomb calorimeter.
3. Experiment results
3.1 Composition and heating value of raw
MSW and product
The raw MSW and product characteristics are shown in Table 2.

Table 2. Raw MSW and Product Characteristics

Parameter | Unit | Raw MSW | Treated Product
Density | g/cm3 | 0.15 | 0.61
Moisture content | % | 33 | 44
Combustible content | % | 50 | 46
Ash content | % | 17 | 10
Higher Heating Value | MJ/kg | 18 | 18

It can be seen that the density of the hydrothermally-treated product, in dry form, was approximately four times that of the raw MSW, so a 75% waste volume reduction by hydrothermal treatment can be expected.
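The 75% figure follows directly from the densities in Table 2: for a fixed dry mass, the volume ratio is the inverse of the density ratio. A minimal check:

```python
# For a fixed mass, V_product / V_raw = rho_raw / rho_product
rho_raw, rho_product = 0.15, 0.61  # g/cm^3, from Table 2
volume_reduction = 1 - rho_raw / rho_product
print(f"{volume_reduction:.0%}")  # 75%
```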
The product exhibited a higher moisture content but an almost equal heating value in dry form compared with the raw MSW. The heating values of the raw MSW and the hydrothermally-treated product were both almost equal to that of low-grade sub-bituminous coal (approximately 19 MJ/kg5). Thus, it is possible to use the hydrothermally-treated products as an alternative solid fuel substituting for coal.
3.2 Product appearance
The hydrothermal treatment process produced grayish, uniform pulp-like products, as shown in Figure 2. It has been reported that at higher reaction temperatures and longer holding periods, the hydrothermally-treated products become more uniform.4
The uniformity and visually smaller particle size can be explained by the fact that the pressure and temperature in the hydrothermal process break up the MSW particles, as shown in Figure 3.
Figure 2. Raw MSW (left) and product (right)
Figure 3. Particle breakage due to hydrothermal treatment
4. Performance analysis
4.1 Energy balance of the treatment
The measured data shown in Figure 4, along with the operational data shown in Table 3, can be summarized to obtain the total energy balance of the system. The calorific value of the dry solid waste was taken as the datum (100%) for the energy balance calculation. The energy required by the boiler can be calculated from the water and steam enthalpy difference across the boiler, combined with the boiler efficiency and utility. The energy loss to the condenser and the water content in the product were also calculated based on the product's temperature. The heat loss was obtained from the difference in the total energy balance. The general formulas used to calculate the total mass and energy balances are shown in the equations below.
The total mass balance can be written as:

m_MSW + m_w + m_s = m_p + m_c

where:
m_MSW = dry MSW input mass
m_w = water mass in the MSW, from the moisture content of MSW
m_s = steam mass entering the reactor
m_p = product mass
m_c = condensed steam from the reactor to the condenser
The energy balance equation can be derived as:
...(1)
while the “Energy from steam” can be obtained from:
...(2)
and the “Energy for steam injection phase” can be
derived as:
...(3)
along with the “Energy for holding period”, which can be
obtained from:
...(4)
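The boiler term in the balance above can be illustrated numerically. Apart from the water consumption taken from Table 3, all numbers below are illustrative assumptions (typical steam-table values), not values from the paper:

```python
# Q_boiler = m_steam * (h_steam - h_feedwater); efficiency and losses omitted
h_steam = 2799.0  # kJ/kg, enthalpy of saturated steam at ~2.0 MPa (steam tables)
h_water = 105.0   # kJ/kg, feedwater at ~25 degC (assumed)
m_steam = 458.0   # kg, water consumption from Table 3, taken as steam raised
q_boiler_mj = m_steam * (h_steam - h_water) / 1000.0
print(round(q_boiler_mj))  # 1234 (MJ delivered to the steam)
```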
Figure 4. Measured data from hydrothermal treatment plant (fuel consumption (l), water consumption (l), temperature (°C), and motor ampere (A) versus time (min))
Table 3. Operational Data of Hydrothermal Treatment System

Parameter | Unit | Value
Processing time | min | 61
Input raw MSW | kg | 705
Output product | kg | 760
Fuel consumption | l | 45
Average boiler utility | kW | 6.3
Average motor ampere | A | 35
Water consumption | l | 458
Steam temperature | °C | 215
Steam pressure | MPa | 2
Condensed water | l | 320
Condenser temperature | °C | 80

The total energy balance is shown in Figure 5. It can be seen that the typical amount of energy required to treat the MSW was approximately one-ninth (11.3%) of the energy content in the treated MSW. This means that the hydrothermal treatment process can utilize its own product as the energy source required to run the process and at the same time produce net solid fuel products.

Figure 5. Total energy balance of hydrothermal treatment plant

4.2 Comparison with other waste treatments

Three main parameters, namely treatment capacity, product characteristics, and processing energy requirement, were considered in the comparative analysis. From Table 4, it can be observed that, compared with other waste treatments that produce fuel, the hydrothermal treatment is superior mostly in terms of plant size and energy requirement. Combined with its flexibility in raw material input and the high calorific value of its product, the hydrothermal treatment can be a good candidate for MSW treatment in the current waste disposal system to produce an alternative renewable energy source.
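The "one-ninth" wording and the 11.3% figure quoted for the treatment energy are consistent, as a one-line check shows:

```python
treatment_fraction = 0.113  # treatment energy as a fraction of product energy
print(round(1 / treatment_fraction, 1))  # 8.8, i.e. roughly one-ninth
```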
Table 4. Comparison between Various Waste-to-Fuel Treatment Systems

Parameter | Unit | Hydrothermal Treatment | Anaerobic Digestion6,7 | RDF Pelletizing8
Raw MSW input | - | any | organic | any
Equipment list | - | reactor, steam generator | sorter, crusher, digester, mixer | sorter, dryer, crusher, pelletizer
Retention time | - | 3 hours | 1-3 weeks | continuous
Product type | - | solid pulp | gas | solid pellet
Product density | kg/m3 | 610 | 1.23 | 300-700
Volume reduction | % | 75 | 0 | 50-80
Product's calorific value | MJ/kg | 18 | 16-20 | 12-16
Product/raw MSW | % | 100 | 12-25 | 100
Energy requirement per energy content in the product | % | 11.3 | 20-40 | 8-10

5. Conclusions

An innovative hydrothermal treatment for MSW has been developed, and a commercial-scale plant experiment has been conducted to evaluate its performance in converting MSW into alternative fuel. Uniform, pulp-like products were obtained from the reactor. The calorific values of the hydrothermally-treated products were not substantially altered and were almost equal to that of low-grade sub-bituminous coal.

The energy balance calculation revealed that the energy required for the hydrothermal treatment was as low as one-ninth of the energy content in the product, which indicates that the hydrothermal treatment is a self-sustaining system requiring less energy input than other waste-to-fuel treatment processes.

Considering these advantages, the hydrothermal treatment can be considered a viable way to treat MSW and convert it into an alternative renewable energy source.

6. Future applications

The hydrothermal treatment is able to convert not only MSW into solid fuel but also other unutilized resources into other usable products, such as fertilizer from sewage sludge9, as shown in Figure 6. In this way the process can be applied to treat almost any kind of waste (unutilized resources), resulting in total recycling of the material from human activities.

Figure 6. Application of hydrothermal treatment: unutilized resources (MSW, agricultural waste, sewage sludge, animal manure, food residue, organic waste) are converted into usable products (solid fuel, liquid fertilizer, livestock feed)

Added to this is the treatment's capability to convert the organic chlorine in plastics into inorganic, water-soluble chlorine that can be washed out with water10. The hydrothermal treatment can therefore produce a clean fuel from plastic waste, whose chlorine content is a major problem in using MSW as fuel.
References
1. The Asahi Shimbun. Coal prices surging due to global demand, Australia flooding. March 4th, 2008.
2. Ministry of Environment Japan. State of discharge and treatment of municipal solid waste in FY 2008. 2010.
3. Sato K, Jian Z, Soon JH, et al. Studies on fuel conversion of high moisture content biomass using middle pressure steam. Proc. Thermal Eng. Conf.: G132. 2004.
4. Prawisudha P. Conversion of Municipal Solid Waste into coal co-firing fuel by hydrothermal treatment. 2nd AUN/SEED-Net RCNRE: #G_005. 2010.
5. American Society for Testing and Materials. Gaseous Fuels: Coal and Coke. In Annual Book of ASTM Standards. 1999.
6. Braber K. Anaerobic digestion of Municipal Solid Waste: A modern waste disposal option on the verge of breakthrough. Biomass & Bioenergy. 1995(9): 365-376.
7. Macias-Corral M, Samani Z, Hanson A, et al. Anaerobic digestion of municipal solid waste and agricultural waste and the effect of co-digestion with dairy cow manure. Bioresource Technology. 2008(99): 8288-8293.
8. Caputo AC, Pelagagge PM. RDF production plants I: Design and costs. Applied Thermal Engineering. 2002(22): 423-437.
9. Jambaldorj G, Takahashi M, Yoshikawa K. Liquid fertilizer production from sewage sludge by hydrothermal treatment. Proc. Int'l Symp. on EcoTopia Science. 2007.
10. Prawisudha P, Namioka T, Lu L, et al. Dechlorination of simulated plastic waste in lower temperature employing hydrothermal process and alkali addition. J. of Environmental Science & Engineering. 2011 Vol. 5(4): 432-439.
Artikel Riset
First Place Winner, TICA Cluster II 2011
Development of Indonesian
Automated Document Reader:
Evaluation of Text Segmentation Algorithms
Teresa Vania Tjahja1, Anto Satriyo Nugroho2
1 Faculty of Information Technology/Swiss German University
2 Center for Information and Communication Technology (PTIK)/Agency for the Assessment and Application of Technology (BPPT)
E-mail: [email protected], [email protected]
Abstract
Indonesian Automated Document Reader (I-ADR) is an assistive technology for people with visual
impairment, which converts textual information on paper documents to speech. This research is
conducted to develop a prototype of I-ADR featuring Optical Character Recognition (OCR), Text
Summarization, and Text-to-Speech (TTS) Synthesizer modules. The main focus is the Text Segmentation
module as an integral part of OCR. In this study, Text Segmentation algorithms for grayscale and color images
are developed and evaluated. Text segmentation for grayscale images uses an improved version of Enhanced
Constrained Run-Length Algorithm (CRLA)1, while segmentation for color images employs Multivalued
Image Decomposition algorithm2 combined with the improved Enhanced CRLA. Based on the experiments,
the success rate for grayscale images is 100% and for color images is 96.35%.
Keywords: visual impairment, text segmentation, text summarization, text-to-speech synthesizer, OCR
1. Introduction
As technology advances, more documents are converted to or made available in electronic form. However, paper remains the most common medium for carrying information, especially in developing countries such as Indonesia. Unfortunately, that kind of information is not accessible to people with visual impairment. To improve the access of visually-impaired people to textual information on paper, the Agency for the Assessment and Application of Technology has developed the Indonesian Automated Document Reader (I-ADR), which consists of
4 main modules as shown in Figure 1: voice-based user
interface, Optical Character Recognition (OCR), Text
Summarization, and Text-to-Speech (TTS) Synthesizer.
This research focuses on the Text Segmentation module as an integral part of the OCR module.

The remainder of this paper is organized as follows: section 2 explains the proposed I-ADR system and section 3 provides experimental results and discussion, which are concluded in section 4.
2. Proposed System
In this research, we have integrated three of the four main
modules: OCR, Text Summarization, and TTS Synthesizer. In this section, each module will be discussed, emphasizing on Text Segmentation in OCR module.
Figure 1 Indonesian Automated Document Reader modules
2.1. Optical Character Recognition (OCR)
The current I-ADR system accepts document images (both color and grayscale) as input, locates and extracts text from the image, and produces speech based on the extracted text. The OCR module has a pre-processing submodule, which removes noise with a median filter and simplifies the grayscale image representation by binarization using Otsu's thresholding scheme. The resulting binary image is then passed to the Text Segmentation submodule.
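Otsu's scheme picks the threshold that maximizes the between-class variance of the grayscale histogram. The sketch below is a generic textbook implementation for illustration, not the I-ADR code:

```python
def otsu_threshold(hist):
    """Otsu's method: choose the threshold t that maximizes the between-class
    variance w0*w1*(m0-m1)^2 of a grayscale histogram (list indexed by level)."""
    total = sum(hist)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t, h in enumerate(hist):
        w0 += h                      # pixels in class 0 (levels <= t)
        if w0 == 0:
            continue
        w1 = total - w0              # pixels in class 1 (levels > t)
        if w1 == 0:
            break
        sum0 += t * h
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1   # class means
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# A bimodal 10-level histogram: peaks around levels 2 and 7
print(otsu_threshold([0, 5, 10, 5, 0, 0, 5, 10, 5, 0]))  # 3
```

Pixels at or below the returned level go to one class and the rest to the other, which is exactly the binarization the pre-processing submodule needs.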
Figure 2 Result of Multivalued Image Decomposition algorithm: (a) Original image, (b) Bit-dropping and color quantization result
Figure 3 Multivalued Image Decomposition results
For color document images, the algorithm starts with the Multivalued Image Decomposition algorithm2, which simplifies the color image representation. The algorithm has 3 main steps: bit-dropping, color quantization, and decomposition of the image based on colors. The result of the decomposition process (shown in Figure 2 and Figure 3) is a set of images, each containing objects of a particular color. Since each image from the decomposition process has only two colors, foreground and background, the decomposition results are then converted into binary grayscale images (with only two intensities). The next step is to extract text from each of those images.
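The bit-dropping step can be pictured as masking off the low-order bits of each color channel before quantization. A minimal sketch (the number of retained bits is an assumption for illustration):

```python
def bit_drop(pixel, keep=3):
    """Keep only the top `keep` bits of each 8-bit channel, collapsing
    similar colors so the quantizer has far fewer distinct values to group."""
    mask = ((1 << keep) - 1) << (8 - keep)
    return tuple(c & mask for c in pixel)

print(bit_drop((200, 100, 50)))  # (192, 96, 32)
```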
Text extraction from binary images is performed using the Enhanced Constrained Run-Length Algorithm (CRLA)1. The algorithm basically groups objects in an image into homogeneous regions based on their size. For instance, characters in a text line may form one homogeneous region, while a picture forms another. These regions are examined further to be classified as text or non-text. For a more accurate result, the examination should be performed on each individual region, a step not specifically described in the Enhanced CRLA paper.
To obtain each region, the general idea is to scan through the image and use histogram analysis to determine the starting and ending points of each individual region. With histogram analysis, we basically count the number of foreground and/or background pixels in a particular area (row, column, etc.). When scanning through the image, a row with foreground pixels is considered the starting row of a region, and the first row without foreground pixels after the starting row is considered the ending row. For each range of rows (from starting to ending row), the starting and ending columns of the regions are determined in a similar manner.
Figure 4 illustrates the text segmentation problem encountered during our research. Consider Figure 4(a) as part of a document image. The grey area represents a picture, while the black ones represent text. If we perform the commonly used raster scans (vertical and horizontal), the results are Figure 4(b) and (c), with the candidate individual region in Figure 4(c) still covering several text lines. To further separate those text lines, one more vertical scan should be performed for each candidate region. The results are Figure 4(d) and (e), where the picture is maintained as one region while the text lines are separated.
A more complex example is given in Figure 5. Figure 5(a) illustrates a part of a document image. Again, the gray areas represent pictures and the black ones text lines. If the three scans used for Figure 4(a) are performed on Figure 5(a), the results are shown in Figure 5(b) and (c), with the lower-left candidate region still consisting of more than one individual region. In this case, we need two more scans: one horizontal scan to separate picture X from picture Y and the text line, and another vertical scan to separate picture Y from the text line (see Figure 6).

Figure 4 Illustration of simple text segmentation problem
Figure 5 Illustration of more complex text segmentation problem
Figure 6 Candidate region in Figure 5(b) in larger view
For the implementation in program code, the number of scans along with the scanning order needs to be determined. However, considering the two previous examples, the scans required for an arbitrary layout may vary without bound. Therefore, we propose that the scans be performed recursively, with the recursion for a candidate region terminating when the region cannot be divided any further3. The pseudocode of the proposed recursive scans is shown in Figure 7. When a candidate individual region is found, it is classified as text or non-text by calculating its Mean Black Run-Length (MBRL) and Mean Transition
Count (MTC) as explained in1.
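Figure 7 itself is not reproduced in this transcription. As an illustration of the idea, and not the authors' actual pseudocode, a recursive projection-cut segmenter can be sketched as follows (all names here are ours):

```python
def runs(profile):
    """[start, end) intervals where a 1-D projection profile is non-zero."""
    out, start = [], None
    for i, v in enumerate(list(profile) + [0]):  # sentinel closes a trailing run
        if v and start is None:
            start = i
        elif not v and start is not None:
            out.append((start, i))
            start = None
    return out

def segment(img, top=0, left=0, axis=0, tried_other=False):
    """Recursively split a binary image (list of 0/1 rows) with alternating
    row/column projection cuts; emit a region's bounding box once it cannot
    be divided along either axis (a candidate individual region)."""
    if not img or not img[0]:
        return []
    prof = [sum(r) for r in img] if axis == 0 else [sum(c) for c in zip(*img)]
    parts = runs(prof)
    if not parts:
        return []
    if len(parts) == 1 and parts[0] == (0, len(prof)):
        if tried_other:  # indivisible both ways
            return [(top, top + len(img), left, left + len(img[0]))]
        return segment(img, top, left, 1 - axis, True)
    boxes = []
    for a, b in parts:
        if axis == 0:
            boxes += segment(img[a:b], top + a, left, 1, False)
        else:
            boxes += segment([row[a:b] for row in img], top, left + a, 0, False)
    return boxes

# A text line across the top, a block lower-left, a block lower-right
img = [[0] * 8 for _ in range(8)]
for c in range(8):
    img[0][c] = img[1][c] = 1
for r in range(4, 8):
    for c in range(3):
        img[r][c] = 1
for r in range(6, 8):
    for c in range(5, 8):
        img[r][c] = 1
print(sorted(segment(img)))  # [(0, 2, 0, 8), (4, 8, 0, 3), (6, 8, 5, 8)]
```

Each emitted box would then be classified as text or non-text from its MBRL and MTC values.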
The result of the Text Segmentation module, which implements the Multivalued Image Decomposition algorithm and the Enhanced Constrained Run-Length algorithm equipped with our proposed recursive scans, is a binary image consisting of only text. This image is then passed to the Character Extraction and Recognition submodule. Individual characters are segmented from the extracted text with histogram analysis. Character recognition itself utilizes a Multilayer Perceptron neural network classifier trained with the backpropagation algorithm.
The last submodule of the OCR module is the post-processing submodule, which performs word correction. Errors often occur during character recognition, leaving the produced words meaningless, so they should be corrected. The post-processing module compares each recognized word with a list of Indonesian dictionary words and selects the dictionary word most similar to the recognized word as the correction. The similarity between the two words is calculated with the Longest Common Subsequence (LCS) algorithm4.

Figure 7 Pseudocode of the proposed recursive scans
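The LCS-based similarity can be sketched as follows; normalizing by the longer word's length is our own illustrative choice, not necessarily the one used in I-ADR:

```python
def lcs_len(a, b):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ca == cb else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def correct(word, dictionary):
    """Return the dictionary word most similar to the recognized word."""
    return max(dictionary, key=lambda w: lcs_len(word, w) / max(len(word), len(w)))

print(correct("tekn0logi", ["teknologi", "ekologi", "biologi"]))  # teknologi
```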
2.2. Text Summarization
I-ADR integrates MEAD5 with its Indonesian database provided by SIDoBI (Sistem Ikhtisar Dokumen Bahasa Indonesia)6 to provide the summarization feature. MEAD is a tool for creating summaries and for evaluating the results of other tools. It performs extractive summarization, in which units in documents (sentences and words) are assigned salience scores that determine whether the units should be extracted to construct the summary.
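For intuition only, a toy extractive summarizer in the same spirit (score units by salience, keep the top ones) might look like this; MEAD's actual features (centroid, position, overlap) are far richer:

```python
import re
from collections import Counter

def summarize(text, n=1):
    """Toy extractive summarizer: a sentence's salience is the average
    corpus frequency of its words (a crude stand-in for MEAD's scores)."""
    sentences = [s for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]
    freq = Counter(w.lower() for w in re.findall(r'\w+', text))
    def salience(s):
        words = re.findall(r'\w+', s)
        return sum(freq[w.lower()] for w in words) / len(words)
    return sorted(sentences, key=salience, reverse=True)[:n]

print(summarize("Cats purr. Cats sleep. Dogs bark loudly sometimes."))  # ['Cats purr.']
```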
2.3. Text-to-Speech Synthesizer
To perform the conversion from text to speech, I-ADR uses a free and open-source speech synthesizer named MBROLA7 with an Indonesian voice database.
3. Experimental Results and Discussion
The proposed I-ADR system was evaluated with seven
grayscale images and seven color images obtained by
scanning Indonesian magazine pages of A4 size with a 300
dpi scanner. The experiments were conducted on a laptop
with Intel® Core™ i7 CPU @ 1.7 GHz and 4 GB RAM on
Ubuntu Linux 10.10 platform.
3.1. Text Segmentation Results
The experiments to evaluate Text Segmentation module
are divided into two groups: experiments with grayscale
images to evaluate the algorithm for grayscale images
(from pre-processing to Enhanced CRLA), and experiments with color images to evaluate the algorithm for
color images (from Multivalued Image Decomposition
algorithm to Enhanced CRLA), both equipped with the
proposed recursive scans.
Based on our experiments, without the recursive scans the text segmentation algorithm achieved only 88% accuracy and took an average of 10 seconds, with several text lines missing. This is because those text lines were contained in a candidate region mixed with non-text regions, causing its MBRL and MTC to fall outside the values for text regions. After the recursive scans were used, all text regions were extracted successfully, requiring 10.09 seconds on average. Meanwhile, the text segmentation algorithm for color images achieved 96.35% accuracy and required an average of 25 seconds. Figure 8 shows the results of the text segmentation algorithm with color images.
3.2. Character Extraction and Recognition Results

As explained in section 2, character extraction utilizes histogram analysis and the recognition implements an MLP neural network classifier. The database used for training the neural network consists of 73 characters (26 lowercase and 26 uppercase alphabets, 10 numbers, and 11 symbols). The overall accuracy of the character extraction and recognition algorithm is 98.31%, obtained by manually counting the correctly recognized characters. Often, the errors were caused by imperfect character shapes that were altered during the normalization process, which involves scaling the segmented character image.

Figure 8 Examples of text segmentation results with the proposed system

3.3. Post-processing Results

The current post-processing module, which performs word correction and utilizes the LCS algorithm, had a 94.57% success rate (obtained by manual observation). The errors in this module were caused by too many character recognition errors in a word; in addition, a word might be broken into several strings, or several words combined into one, due to space insertion errors.

4. Conclusion

In this research we have developed a prototype of the Indonesian Automated Document Reader consisting of 3 main modules (OCR, Text Summarization, and TTS Synthesizer), with the main focus on text segmentation algorithms for both grayscale and color images. The text segmentation module combines Multivalued Image Decomposition2 and Enhanced CRLA1, equipped with our proposed recursive scans, and achieved 96.35% accuracy based on our experiments.

Despite the relatively high success rate, some issues remain for future development, including evaluation of the system with large-scale data and improvement of the algorithm to handle non-Manhattan layouts.

References

1. Sun HM. Enhanced Constrained Run-Length Algorithm for Complex Document Layout Processing. International Journal of Applied Science and Engineering. Dec. 2006. 4(3): 297-309.
2. Jain AK, Yu B. Automatic Text Location in Images and Video Frames. In Proc. of 14th International Conference on Pattern Recognition, 1998, pp. 1497-1499. Brisbane, Qld.
3. Tjahja TV, et al. Recursive Text Segmentation for Indonesian Automated Document Reader for People with Visual Impairment. In Proc. of 3rd International Conference on Electrical Engineering and Informatics (ICEEI 2011), CD-ROM B2-2. Bandung, Indonesia.
4. Jones NC, Pevzner PA. An Introduction to Bioinformatics Algorithms. Cambridge, Massachusetts: The MIT Press. 2004.
5. Radev D, et al. MEAD – A Platform for Multidocument Multilingual Text Summarization. In 4th International Conference on Language Resources and Evaluation, 2004. Lisbon, Portugal.
6. Prasetyo B, Uliniansyah T, Riandi O. SIDoBI: Indonesian Language Document Summarization System. In Proc. of International Conference on Rural Information and Communication Technology, 2009, pp. 378-382. Bandung, Indonesia.
7. http://tcts.fpms.ac.be/synthesis/mbrola.html [Accessed May 26, 2011]
Vol 19 No.4
27
Artikel Riset
Pemenang 2 TICA Cluster II 2011
Optimized Turbo Code VLSI Architecture for
LTE using System Level Modeling and Assertion
Based Verification
Ardimas Andi Purwita1, Trio Adiono2
School of Electrical Engineering and Informatics
Electrical Engineering Department
Institut Teknologi Bandung
Bandung Indonesia
e-mail: [email protected], [email protected]
Abstract
Turbo code is a high-performance channel coding scheme that is able to closely approach the Shannon-limit channel capacity. It plays an important role in increasing the performance of one of the latest standards in mobile network technology, Long Term Evolution (LTE).1 In this paper, a Turbo code VLSI architecture for LTE, co-developed by the Ministry of Communication and Information (Menkominfo) of the Republic of Indonesia and the Microelectronics (ME) Laboratory of Institut Teknologi Bandung (ITB), is discussed. Optimization is applied to reduce computational complexity and excessive memory requirements, as well as latency and delay. In order to increase the processing speed, an 8-level parallel architecture is proposed. Furthermore, to increase the processing parallelization, we also apply the Max-log-MAP (Maximum A Posteriori Probability) algorithm,2, 3, 4 the Sliding Window Algorithm (SWA),5 and dual bank Random Access Memory (RAM) for the interleaver and deinterleaver block. Based on the simulation results, the proposed algorithm (encoder and decoder) is almost 16 times faster than the original algorithm6 and requires 42 times less memory (decoder). Additionally, the proposed algorithm reduces the size of the interleaver and deinterleaver block by almost 50%. Lastly, in order to shorten the design cycle, the algorithm and architecture are implemented with System Level Modeling using SystemC, and the verification method used is Assertion Based Verification (ABV)7, 8 using SystemVerilog Assertions (SVA), with the purpose of enhancing the level of confidence in the design.
Keywords: Turbo Code, 8-Level Parallelization, Dual Bank RAM, SystemC, ABV.
1. Introduction
Communication systems have been developing since the invention of Marconi's wireless telegraphy into the digital wireless communication systems commonly used today. LTE, the latest standard in the mobile network technology of the Third Generation Partnership Project (3GPP), represents a radical new step forward in the wireless industry and increases bit rates with respect to its predecessors by means of wider bandwidths and improved spectral efficiency.9

The performance of a communication system is measured by its data rate, Bit Error Ratio (BER), and packet error rate. Several methodologies have been
developed to improve the performance of systems toward the Shannon-limit channel capacity.10, 11, 12 Turbo code is one of the channel coding schemes used to reduce received signal errors in LTE technology.13
As described in the technical specification of LTE,1 the turbo encoder is composed of two Recursive Systematic Convolutional (RSC) encoders concatenated in parallel. The number of data bits at the input of the turbo encoder is K, where 40 ≤ K ≤ 6144. Data is encoded by the first (i.e., upper) encoder in its natural order and by the second (i.e., lower) encoder after being interleaved. Initially, the two switches are in the up position, as shown in Fig. 1.
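To make this structure concrete, the sketch below models a parallel-concatenated encoder with two identical RSC constituent encoders. The generator polynomials (feedback g0 = 1 + D^2 + D^3, output g1 = 1 + D + D^3) follow the 3GPP specification cited above; the interleaver here is a toy permutation, not the actual LTE QPP interleaver, and trellis termination is omitted.

```python
# Sketch of a parallel-concatenated (turbo) encoder: two identical RSC
# encoders, the second fed an interleaved copy of the input bits.
# Polynomials follow the LTE convention: feedback g0 = 1 + D^2 + D^3,
# output g1 = 1 + D + D^3 (an assumption taken from the 3GPP spec).

def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder, 3 memory cells."""
    s = [0, 0, 0]                        # shift register: [a(k-1), a(k-2), a(k-3)]
    parity = []
    for b in bits:
        fb = b ^ s[1] ^ s[2]             # feedback bit: g0 taps at D^2, D^3
        parity.append(fb ^ s[0] ^ s[2])  # parity bit: g1 taps at 1, D, D^3
        s = [fb, s[0], s[1]]             # shift the register
    return parity

def turbo_encode(bits, interleaver):
    """Return the systematic bits and the two parity streams."""
    parity1 = rsc_encode(bits)
    parity2 = rsc_encode([bits[i] for i in interleaver])
    return bits, parity1, parity2

# toy permutation, NOT the LTE QPP interleaver
sys_bits, p1, p2 = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0],
                                [3, 0, 6, 1, 7, 4, 2, 5])
```

The two switches in Fig. 1 correspond to feeding the natural-order and interleaved bit streams to the upper and lower encoder, respectively.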
There are several problems in designing
architecture of Turbo Code for LTE. Turbo code
inherently has a large latency and low throughput due
to the iterative process. Hence, highly parallel decoding
architectures are required to achieve speed-up.
Figure 1. Turbo Encoder [1]

This paper discusses several key problems in the implementation of a highly parallel turbo decoder. The first problem is the tradeoff between the performance and the computational complexity of the turbo decoder. The fundamental turbo decoder architecture used in this paper is based on Valenti and Sun.6 In addition, the component decoder uses the Max-log-MAP algorithm2 with a channel model of Binary Phase Shift Keying (BPSK) modulation over an Additive White Gaussian Noise (AWGN) channel. However, one of the major problems of this algorithm is the excessive memory requirement. Therefore, the SWA is applied to increase the speed. We propose 8-level parallel processing in the implementation of the turbo decoder. Likewise, to speed up the turbo encoder, we also propose 8-level parallelization for the constituent encoder.

The second problem in the implementation of a highly parallel turbo code is the interleaving delay. To reduce this delay, 8-level parallel processing and dual bank RAM are applied to generate the indices for the interleaving process.

In previous work flows, the hardware (HW) design is initially implemented separately from the software (SW), and both are combined only at the last step. As a result, this kind of process takes a long time. Therefore, system level modeling using SystemC14 is applied in this research to shorten the design time. In this paper, SW denotes the model that serves as the reference for the HW (RTL).

Verification is the most crucial phase in the design cycle; consequently, an effective and efficient verification method is required to save time, cost, and effort. In this research, ABV is implemented for verification.

The remainder of this paper is organized as follows. In the second section, the method and experiment procedure are presented. In the next section, the results are presented along with the discussion. Finally, section four concludes the paper.

2. Method and Experiment Procedure
The first step of this research is to figure out the
specification of turbo code based on the technical
standard for LTE1 with purpose of understanding the
architecture. Afterward, the second step is to study the
existing algorithm and architecture as the reference of
the proposed architecture. Then, the third step is to study
about the design techniques used in this research, which
are system level modeling and verification. SystemC is
used as system level modeling environment and ABV is
used as the verification method because of their advantages.7, 8, 14
The fourth step is to design the optimized architecture. There are 8-level constituent encoders, 8-level index generators for the interleaver and deinterleaver, 8-level Max-log-MAP and SWA for the component decoders, and dual bank RAM in the top level architecture.

In the final step, the optimized architecture is modeled
using SystemC. This functional model is verified by directly porting the turbo encoder, a channel, and the turbo decoder; the channel model applied is BPSK modulation with AWGN. The verified model is evaluated by its BER performance. Afterward, the model is translated into RTL HW using Verilog, and this RTL is verified against the functional model. Thereafter, the verification platform is implemented by combining constrained random stimulus input, assertion properties, and their functional coverage.
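As a minimal illustration of the BER-observation step, the Python sketch below pushes random bits through a BPSK mapping and an AWGN channel and counts hard-decision errors. It is an uncoded stand-in: the actual verification flow inserts the turbo encoder before the channel and the turbo decoder after it.

```python
import math
import random

def bpsk_awgn_ber(n_bits, ebn0_db, seed=1):
    """Monte-Carlo BER of uncoded BPSK over AWGN at a given Eb/N0 (dB)."""
    random.seed(seed)
    # noise standard deviation per real dimension for unit-energy symbols
    sigma = math.sqrt(1.0 / (2.0 * 10 ** (ebn0_db / 10.0)))
    errors = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        tx = 1.0 if bit else -1.0            # BPSK mapping: 1 -> +1, 0 -> -1
        rx = tx + random.gauss(0.0, sigma)   # AWGN channel
        errors += (rx > 0.0) != bool(bit)    # hard decision at threshold 0
    return errors / n_bits

ber = bpsk_awgn_ber(100_000, 6.0)   # roughly 2e-3 for uncoded BPSK at 6 dB
```

Sweeping ebn0_db and plotting the measured BER gives curves of the kind shown in Fig. 5.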
3. Results And Discussion
3.1 The Proposed Turbo Code
The proposed turbo encoder refers to Fig. 1; we parallelize the left and right constituent encoders, so dual bank RAM is used as shown in Fig. 2. The encoder works 8-level in parallel, processing 8 input bits simultaneously.

Figure 2. The Proposed Turbo Encoder

The proposed constituent encoder is described in Eq. 1.

...(1)

The proposed index generator for the interleaver and deinterleaver is described in Eq. 2.
...(2)
Both equations are derived from the original equations stated in the standard [1]; the technique used is common-term reduction, as shown in Proof 1.

The top level architecture of the turbo decoder is shown in Fig. 3.
The component decoder is a combination of the SWA and the Max-log-MAP algorithm. As with the turbo encoder, it works 8-level in parallel for both the SWA and the Max-log-MAP algorithm, with block length 40. The dashed line shows the backward recursion metric computation
and the straight line shows the forward recursion metric and LLR computation.

...(3)

Figure 3. The Proposed Turbo Decoder

Figure 4. The Proposed SWA and Max-log-MAP algorithm

Figure 5. The BER performance, K = 200

Figure 6. The Timing Diagram

The BER performance for K = 200 is shown in Fig. 5, with a BER target of 10^-5. This result matches the desired condition; therefore, the design is verified and ready to be translated into HW RTL.

Finally, after implementation into HW RTL, Fig. 6 shows the result for K = 6144. The figure also shows a counter value
which counts the matched output values against the input bits. The figure shows that all of the output values match.

Figure 7. The ABV Implementation

Fig. 7 shows the result of the ABV implementation: there are no failing properties. If the HW output matches the model and no properties fail, the design can be considered fully verified.

4. Conclusion
This paper has discussed an optimized turbo code VLSI architecture for LTE using system level modeling and assertion based verification. The main purposes of this research are to reduce computational complexity and excessive memory requirements, as well as latency and delay. Besides the theory, this research also focuses on modern design methods to shorten the design cycle and to enhance the level of confidence in the design. These problems are solved by combining a parallel architecture, the SWA, the Max-log-MAP algorithm, system level modeling, and ABV.

Reference
Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and channel coding (Release 9), 3GPP TS 36.212 v9.2.0, June 2010.
W. Koch and A. Baier, "Optimum and sub-optimum detection of coded data disturbed by time-varying intersymbol interference [applicable to digital mobile radio receivers]," in Proc. IEEE GLOBECOM '90, Dec. 1990, pp. 1679-1684, vol. 3.
J. Erfanian, S. Pasupathy, and G. Gulak, "Reduced complexity symbol detectors with parallel structure for ISI channels," IEEE Transactions on Communications, vol. 42, no. 2/3/4, pp. 1661-1671, Feb./Mar./Apr. 1994.
J. Erfanian and S. Pasupathy, "Low-complexity parallel-structure symbol-by-symbol detection for ISI channels," in Proc. IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Jun. 1989, pp. 350-353.
A. Viterbi, "An intuitive justification and a simplified implementation of the MAP decoder for convolutional codes," IEEE Journal on Selected Areas in Communications, vol. 16, no. 2, pp. 260-264, Feb. 1998.
M. C. Valenti and J. Sun, "The UMTS turbo code and an efficient decoder implementation suitable for software-defined radios," International Journal of Wireless Information Networks, vol. 8, pp. 203-216, 2001.
H. Foster, A. Krolnik, and D. Lacey, Assertion-Based Design, Kluwer Academic Publishers, 2004.
EURASIP J. Wirel. Commun. Netw., vol. 2009, 2009.
J. Hagenauer and P. Hoeher, "A Viterbi algorithm with soft-decision outputs and its applications," in Proc. IEEE GLOBECOM '89, Nov. 1989, pp. 1680-1686, vol. 3.
L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate (corresp.)," IEEE Transactions on Information Theory, vol. 20, no. 2, pp. 284-287, Mar. 1974.
L. Hanzo, Turbo Coding, Turbo Equalisation and Space-Time Coding for Transmission over Fading Channels. New York, NY, USA: John Wiley & Sons, Inc., 2002.
R. H. Morelos-Zaragoza, The Art of Error Correcting Coding. John Wiley & Sons, 2006.
T. Grotker, System Design with SystemC, Kluwer Academic Publishers, Norwell, MA, USA, 2002.
Artikel Riset
Pemenang 3 TICA Cluster II 2011
Evaluation of Fingerprint Orientation Field
Correction Methods
A. A. K. Surya1, A. S. Nugroho2
Faculty of Information Technology, Swiss German
University, Indonesia1
Agency for the Assessment and Application of
Technology (BPPT), Indonesia2
e-mail: [email protected],
[email protected]
Abstract
The estimation of the fingerprint Orientation Field (OF) plays an important role in most fingerprint feature extraction algorithms. Many of the later stages in the fingerprint feature extraction process (e.g. ridge enhancement, singular points detection) utilize fingerprint OF information as a cornerstone, hence the far-reaching implication of its estimation for the whole recognition process. Unfortunately, accurate and robust estimation of fingerprint OF in low-quality fingerprint images is difficult and still remains a challenge today. This research evaluates the effectiveness of two well-known fingerprint OF correction methods, based on low-pass filtering and least square approximation, respectively. The experimental results
show that the classical filter-based method is very efficient in computation and has a comparable performance
to the more recent method based on least square approximation.
Keywords: fingerprint recognition, fingerprint orientation field, least square approximation, singular points
1. Introduction
By the year 2012, the civil administration system of Indonesia will adopt the Electronic National Identity Card (E-KTP) and biometric recognition technology to uniquely identify individual citizens. The adoption of these new technologies is motivated by weaknesses inherent in the traditional civil administration system, e.g., the possibility for impostors to commit identity fraud.
Currently, the implementation of the new civil
administration system in Indonesia mostly relies on
foreign products. For the purpose of achieving a better
technological maturity and autonomy of Indonesia as a
nation, a research on fingerprint recognition technologies
has been initiated by the Agency for the Assessment and
Application of Technology (BPPT), Indonesia. This
research is a contribution to the research conducted by
BPPT.
A fingerprint image is characterized by the
interleaved pattern of friction ridges of the skin and
the valleys between them (see Fig. 1). Two different
impressions of fingerprints can be compared by examining
the salient features of the ridge structure to determine
whether they are a match or distinct pair. The goal of the
fingerprint feature extraction task is therefore to extract
and describe a fingerprint image in terms of its salient
features.
According to [1], ridge characteristics can be examined
in a hierarchical order at three different levels. At the
global level (Level 1), ridge flow information can be used
to describe the global pattern of a fingerprint. Global
level features alone are often not sufficient to uniquely
discriminate between individuals. They are commonly
utilized for fingerprint classification or as an intermediate
step to derive other features.
Singular regions and fingerprint Orientation Field
(OF) are among the set of features that can be examined
at the global level. Singular regions are the landmarks
in a fingerprint image, characterized by the high ridge
curvature. Example of singular regions in a fingerprint
image can be seen in Fig. 1, in which the two types of
singular regions (loop and delta) are shown.
Figure 1: A Fingerprint Image and Its Orientation Field (loop and delta singular regions marked)
Fingerprint OF is a matrix that encodes the ridge
flow orientation in a fingerprint image (see Fig. 1). The
estimation of fingerprint OF plays an important role in
most fingerprint feature extraction algorithms. Many of
the later stages in fingerprint feature extraction process
utilize fingerprint OF information as a cornerstone (e.g.
ridge enhancement, singular points detection), thus the
far-reaching implication of its estimation to the whole
recognition process.
Unfortunately, an accurate and robust estimation
of OF is difficult for low-quality fingerprint images. This
is a challenge encountered by any fingerprint recognition
system, especially if the system is deployed in developing
countries. For the case in Indonesia, many of the citizens
perform manual labor as an occupation, thus the presence
of scars and creases in the finger skin is fairly common.
Unreliable OF information can severely affect
the whole recognition process. For this reason, after
the coarse OF estimation has been obtained, a further
post-processing step is needed to correct the estimation
at corrupted regions. One of the most commonly used approaches for OF estimation correction is based on low-pass filtering, e.g. using a box filter [2].
Most methods based on low-pass filtering are
cost-effective and relatively easy to implement. However,
because these methods are based on local information,
they are often ineffective against poor-quality fingerprint
images, in which incorrect orientation information may
dominate the correct ones. For this case, an OF correction
based on global mathematical modeling might perform
better.
An example of a fingerprint OF correction approach based on mathematical modeling is proposed in [3], referred
to as Fingerprint Orientation Model based on 2D Fourier
Expansion (FOMFE). Based on the precomputed coarse
OF, this model can approximate the global fingerprint OF
using a trigonometric polynomial, whose parameters can
be derived from least square approximation.
The approach based on least square approximation using Fourier series as proposed in [3] is of interest because
of its capability to approximate fingerprint OF without
requiring a prior detection of singular points position.
In this research, we attempt to evaluate the performance
of this OF correction approach in terms of speed and
effectiveness. As a comparison, the classical filter-based
approach is also implemented and discussed.
Figure 2: The Implemented Feature Extraction Algorithm (Start, Preprocessing, Segmentation, OF Estimation, OF Correction, SP Detection, End)
2. Feature Extraction Algorithm
Fig. 2 summarizes the feature extraction algorithm implemented in this research. The effectiveness of the implemented OF correction methods is measured using the singular points (SP) detection rate. The SP detection approach implemented here is based on Poincaré index information [4]. This approach is known to be sensitive to noise in the fingerprint OF, thus
the resulting SP detection rate can reflect the quality of
the estimated OF.
In the preprocessing stage, the input fingerprint
image is convolved with a Gaussian filter of size 5×5.
This operation is intended to suppress small noises and
fingerprint pores, whose existence might reduce the
quality of the estimated OF.
The algorithm proceeds with a segmentation
process to discriminate between the regions of interest
in the fingerprint image from the background. After
segmentation, the focus of the feature extraction
algorithm can be narrowed down to the region of interest,
thus unnecessary computation can be avoided and
spurious SP detections can be reduced. In this system, the
segmentation process is performed by utilizing local gray-level intensity information.
The algorithm proceeds with the computation
of coarse OF. There are several approaches that can be
utilized to obtain the coarse OF, with the most prevalent
being the gradient-based approach, particularly because
of its computational efficiency and precision. In this
system, the computation of coarse OF is performed using the gradient-based approach as implemented in [5]. In order to reduce computational cost, instead of assigning orientation information to each pixel in the image, the
OF is discretized into a set of non-overlapping blocks,
each of size 11×11 pixels.
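The standard gradient-based block estimate can be sketched as follows; this NumPy version uses simple central-difference gradients and the usual doubled-angle averaging, and may differ in detail from the cited implementation.

```python
import numpy as np

def coarse_orientation_field(img, block=11):
    """Gradient-based block orientation estimate (one angle per block).

    Averages the doubled gradient angles per block via the sums
    Gxy = sum(2*gx*gy) and Gxx-yy = sum(gx^2 - gy^2), then halves the
    resulting angle; the ridge orientation is perpendicular to the
    dominant gradient direction.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)             # derivatives along rows, columns
    h, w = img.shape
    of = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            sl = (slice(i * block, (i + 1) * block),
                  slice(j * block, (j + 1) * block))
            gxy = 2.0 * np.sum(gx[sl] * gy[sl])
            gxx_yy = np.sum(gx[sl] ** 2 - gy[sl] ** 2)
            of[i, j] = 0.5 * np.arctan2(gxy, gxx_yy) + np.pi / 2.0
    return of   # ridge angles in radians, defined modulo pi
```

On a 480×320 image with 11×11 blocks this yields roughly the 43×29 orientation matrix mentioned in the speed benchmark below.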
After the coarse OF has been obtained, a further
post-processing step is needed to refine the estimation
at corrupted regions. In this research, two OF correction
approaches are implemented and compared: the approach
based on low-pass filtering (as implemented in 2) and least
square approximation (as implemented in 3).
For the approach based on low-pass filtering, a box filter of size 3×3 is employed. The number of filtering iterations is varied in order to demonstrate its effect on the quality of the resulting OF.
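Because orientations are π-periodic, filter-based smoothing is normally applied in the doubled-angle domain: convert θ to the vector field (cos 2θ, sin 2θ), box-filter both components, and convert back. The sketch below follows that common scheme and is only an approximation of the exact procedure in [2].

```python
import numpy as np

def smooth_orientation_field(of, iterations=1):
    """3x3 box-filter smoothing of a block orientation field (radians).

    Orientations are pi-periodic, so they are averaged as the vectors
    (cos 2*theta, sin 2*theta) rather than as raw angles.
    """
    c, s = np.cos(2.0 * of), np.sin(2.0 * of)
    h, w = of.shape
    for _ in range(iterations):
        # pad with edge values so border blocks keep a full 3x3 window
        cp, sp = np.pad(c, 1, mode="edge"), np.pad(s, 1, mode="edge")
        c = sum(cp[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
        s = sum(sp[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    return 0.5 * np.arctan2(s, c)
```

A single outlier block is pulled back toward its neighbours after one pass, which is the noise-reduction effect measured in the benchmark; repeated passes increasingly displace genuine high-curvature structure.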
Least square approximation is a systematic
approach to find an approximation model that describes
the general trend of the data. In the case of fingerprint
OF, the resulting approximation model can be utilized as
a noise reduction tool. In this research, bivariate Fourier
series are employed as the basis for the approximation
function. Fourier series is employed because of its
capability to approximate data with high-curvature
characteristic, which is the case for fingerprint OF. In this
research, the degree of expansion K of the series will be
varied in order to demonstrate its effect on the quality of the resulting OF. For further technical details regarding these two OF correction approaches, please consult the corresponding publications [2] and [3].
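The least-squares idea can be sketched by fitting smooth trigonometric surfaces to the doubled-angle components of the OF. The small 2D cosine/sine basis below is a stand-in for the bivariate Fourier expansion of FOMFE; the exact formulation and weighting are given in [3].

```python
import numpy as np

def fit_of_least_squares(of, K=3):
    """Approximate an orientation field with a small trigonometric basis.

    Fits cos(2*theta) and sin(2*theta) separately by linear least
    squares, then recombines them into a smoothed angle field.
    """
    h, w = of.shape
    ys, xs = np.meshgrid(np.linspace(0, np.pi, h),
                         np.linspace(0, np.pi, w), indexing="ij")
    cols = []                        # design matrix columns: basis surfaces
    for m in range(K):
        for n in range(K):
            cols += [np.cos(m * ys) * np.cos(n * xs),
                     np.cos(m * ys) * np.sin(n * xs),
                     np.sin(m * ys) * np.cos(n * xs),
                     np.sin(m * ys) * np.sin(n * xs)]
    A = np.stack([col.ravel() for col in cols], axis=1)
    cc, *_ = np.linalg.lstsq(A, np.cos(2.0 * of).ravel(), rcond=None)
    cs, *_ = np.linalg.lstsq(A, np.sin(2.0 * of).ravel(), rcond=None)
    return 0.5 * np.arctan2((A @ cs).reshape(h, w), (A @ cc).reshape(h, w))
```

Note how the regression cost grows quickly with K, which is consistent with the execution times reported in Table 2.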
After OF correction, the algorithm proceeds with the SP detection step. In this system, an approach based on the Poincaré index is utilized for the detection of SP; please consult [6] for further technical details. In a nutshell, based on the Poincaré index information, the locations of the loop- and delta-type SP can be determined.
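In sketch form, the Poincaré index at a block is the accumulated π-normalized orientation difference along a closed path around it; values near +1/2 and -1/2 flag loop- and delta-type SP (the sign convention depends on the path direction):

```python
import math

def poincare_index(of, i, j):
    """Poincare index at block (i, j) of an orientation field (radians).

    Sums the wrapped orientation differences along the closed 3x3 path
    around the block: ~ +0.5 for a loop, ~ -0.5 for a delta, ~ 0 for a
    regular block.
    """
    path = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
    angles = [of[i + di][j + dj] for di, dj in path]
    total = 0.0
    for a, b in zip(angles, angles[1:]):
        d = b - a
        # orientations are pi-periodic: wrap the step into (-pi/2, pi/2]
        while d <= -math.pi / 2:
            d += math.pi
        while d > math.pi / 2:
            d -= math.pi
        total += d
    return total / (2.0 * math.pi)
```

This local sum is exactly why the detector is sensitive to OF noise: a single corrupted block on the path can push the index past the loop or delta threshold.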
3. Experimental Results
3.1 Performance Evaluation Approach
As previously mentioned, we use the singular points detection rate to measure the performance of the fingerprint OF correction methods. We utilize recall, precision, and F1 as the metrics to measure the effectiveness of SP detection.
Recall R measures the proportion of the correctly
detected SP to the ground-truth SP. Precision P measures
the proportion of the correctly detected SP to the overall
detections.
R = TP / (TP + TN)   ...(1)

P = TP / (TP + FP)   ...(2)
where TP (True Positive) is the number of SP correctly detected, TN (True Negative) is the number of missed SP, and FP (False Positive) is the number of points falsely recognized as SP.
For the case of SP detection, R reflects the accuracy and P reflects the noise-reduction capability. These metrics can be summarized using F1, the harmonic mean of recall and precision.
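With the paper's labels (its TN, the missed SP, plays the role usually called FN), the three metrics and the harmonic mean F1 = 2PR/(P + R) can be computed as:

```python
def sp_detection_metrics(tp, tn, fp):
    """Recall, precision, and F1 using the paper's definitions:
    tp = correctly detected SP, tn = missed SP, fp = spurious detections."""
    recall = tp / (tp + tn) if tp + tn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2.0 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, f1

# e.g. 90 correct detections, 10 missed SP, 30 spurious detections:
r, p, f1 = sp_detection_metrics(90, 10, 30)   # r = 0.9, p = 0.75, f1 ~ 0.82
```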
In this research, the locations of ground-truth
SP are manually labeled. We set the distance threshold
between a detected SP to ground-truth SP to be 20
pixels. A detected SP will be considered as a true positive
detection if its distance to the nearest ground-truth SP is
below the specified threshold.
3.2 Experimental Data

The fingerprint images used in this experiment were obtained from the Directorate General of Population and Civil Registry, a subdivision under the Ministry of Home Affairs of the Republic of Indonesia. The fingerprint images were electronically captured using a live-scan fingerprint sensor. All of the fingerprint images are of size 480×320 pixels.

In this experiment, 300 fingerprint images are used as experimental data. Based on visual inspection, these images are roughly divided into 2 categories (150 images for each category): good-quality fingerprints, with a clear overall ridge structure, and poor-quality fingerprints, characterized by scars, smudges, creases, etc.

3.3 Singular Points Detection Benchmark

We compared the SP detection rate for the coarse OF and the OF corrected using the following two methods: the filter-based method (with a box filter in 1, 3, 5, 7, and 9 iterations) and the approximation-based method (with Fourier basis functions and K set to 3, 4, 5, 6, and 7).

Fig. 3 illustrates the OF correction and SP detection results for a poor-quality fingerprint image. The ground-truth SP and the detected SP are marked as O (red) and X (blue), respectively. Table 1 summarizes the average precision, recall, and F1 for all sample fingerprints.

Figure 3: Illustration of OF Correction and SP Detection Result (panels: Fingerprint, Coarse, Filter-1, Filter-5, Filter-9, Fourier-3, Fourier-5, Fourier-7)

Table 1: Average SP Detection Results

             Good-quality         Poor-quality
Method       P     R     F1       P     R     F1
Coarse       0.66  0.95  0.73     0.19  0.94  0.28
Filter-1     0.93  0.93  0.92     0.70  0.91  0.74
Filter-3     0.89  0.86  0.86     0.80  0.82  0.80
Filter-5     0.81  0.77  0.78     0.74  0.71  0.71
Filter-7     0.69  0.63  0.64     0.67  0.65  0.65
Filter-9     0.58  0.53  0.54     0.60  0.57  0.56
Fourier-3    0.72  0.68  0.69     0.67  0.69  0.67
Fourier-4    0.86  0.84  0.84     0.79  0.84  0.79
Fourier-5    0.91  0.90  0.89     0.79  0.88  0.81
Fourier-6    0.92  0.93  0.90     0.78  0.92  0.81
Fourier-7    0.92  0.95  0.91     0.73  0.92  0.77

Several points can be deduced from the SP detection results:

The filter-based method tends to shift the position of the SP as the number of iterations increases (see the SP shift from Filter-1 to Filter-9 in Fig. 3), resulting in more missed SP. This is reflected in the decreasing precision and recall values of the filter-based method as the number of iterations increases.

Filter-1 performs best for good-quality fingerprints and Filter-3 performs best for poor-quality fingerprints. This shows that a higher number of iterations is needed to reduce noise in the OF of a poor-quality fingerprint image.

For good-quality fingerprint images, the performance of the approximation-based method tends to increase as the degree of expansion K increases. This shows that with a higher K, the approximation model can better preserve the position of the SP (see the SP shift from Fourier-3 to Fourier-7 in Fig. 3).

For poor-quality fingerprint images, the performance of the approximation-based method tends to increase up to K = 5 and then declines. This is caused by model overfitting, which reduces the noise-reduction capability of the model (see the spurious SP detections for Fourier-7 in Fig. 3).
3.4 Speed Benchmark
In this experiment, the speed test is conducted on a
notebook PC with Intel Core 2 processor, running
Ubuntu Linux 11.04 as the operating system. The
fingerprint feature extraction system is written in the C++ programming language. The speed test is conducted by
observing the time required to finish the execution of the
feature extraction system.
Table 2: Average Execution Time

Method      Time (s)      Method       Time (s)
Coarse      0.024         Fourier-3    0.112
Filter-3    0.024         Fourier-4    0.169
Filter-4    0.027         Fourier-5    0.241
Filter-5    0.025         Fourier-6    0.335
Filter-6    0.027         Fourier-7    0.446
Filter-7    0.023
Table 2 summarizes the average execution time
for each OF correction method. From the table, it can
be seen that the computational effort for the filter-based
method is almost negligible. This is particularly caused
by the small size of the matrix on which the filtering
operation is applied (approximately 43×29 elements).
The method based on least square approximation, on the
other hand, is very expensive in terms of computation. This
is particularly caused by the large number of operations
needed to compute the regression coefficients.
4. Conclusion
The OF correction method based on low-pass filtering has a performance comparable to the method based on least square approximation, as long as the number of iterations is kept small. In this experiment, the best performance is achieved with one iteration for good-quality fingerprint images and three iterations for poor-quality fingerprint images.

Considering its computational efficiency, we recommend the OF correction method based on low-pass filtering if the feature extraction module is to be installed in a real-time, large-scale Automated Fingerprint Identification System (AFIS).

References
Maltoni D, Maio D, Jain AK, and Prabhakar S. Handbook of Fingerprint Recognition, 2nd ed. Springer, 2009.
Hong L, Wan Y, and Jain AK. Fingerprint image enhancement: Algorithm and performance evaluation. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1998. 20(8): 777-789.
Wang Y, Hu J, and Phillips D. A fingerprint orientation model based on 2D Fourier expansion (FOMFE) and its application to singular-point detection and fingerprint indexing. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2007. 29: 573-585.
Kawagoe M and Tojo A. Fingerprint pattern classification. Pattern Recognition. 1984. 17(3): 295-303.
Liu M, Jiang X, and Kot AC. Fingerprint reference-point detection. EURASIP Journal on Applied Signal Processing. 2005. 4: 498-509.
Jain AK, Prabhakar S, and Hong L. A multichannel approach to fingerprint classification. IEEE Transactions on Pattern Analysis and Machine Intelligence. 1999. 21(4): 348-359.
Artikel Riset
Nominator TICA Cluster II 2011
Algorithm and Architecture Design of Soft
Decision Demapper for SISO DVB-T by Using
Quadrant K-Best
Probo Aditya N.I., Trio Adiono
Electrical Engineering Department,
Institut Teknologi Bandung
e-mail: [email protected], [email protected]
Abstract
In this paper, we explain how to design the demapper for a SISO DVB-T system. This soft decision method uses the Log Likelihood Ratio algorithm to find constellation errors caused by AWGN noise. In this research, a new likelihood-detection algorithm for SISO systems, called Quadrant K-Best (QK-Best), was developed. The algorithm focuses on the quadrant of the received symbol to select the constellation points that belong to it. Functional simulation of the QK-Best algorithm uses a Rayleigh channel model and AWGN as the noise model. A DVB-T demapper architecture, called the Speaker Architecture, was constructed based on the QK-Best algorithm. It has a working frequency of 39.19 MHz and a latency of about 23 clock cycles. The Speaker Architecture can also be implemented on the Cyclone III FPGA EP3C120F780C7N and supports 3 modulation modes (QPSK, 16QAM, and 64QAM).
Key words: QK-Best, Demapper, DVB-T, Speaker architecture
©2011. Persatuan Pelajar Indonesia Jepang. All rights reserved.
1. Introduction
Nowadays, digital TV has become the major type of television, replacing the dominance of analogue TV in the market. DVB-T is one of the digital TV standards, and its system has two parts, namely the transmitter and the receiver. When data is sent through a channel, it is corrupted by noise from the channel. The DVB-T receiver contains many blocks to recover the data from noise and fading. The demapper has the key role of recovering the modulated signal, transmitted through the medium, from the noise.
To build the DVB-T demapper module, a likelihood algorithm is needed. The Maximum Likelihood Detector (MLD) was the first algorithm to solve the demapper problem, calculating the error distance using all constellation points on the diagram. Because of MLD's inefficient computation, the K-Best algorithm was developed to reduce it. This algorithm, however, only operates in Multi Input Multi Output (MIMO) systems. Since DVB-T is a Single Input Single Output (SISO) system, a new K-Best algorithm for SISO systems was developed, namely the Coordinate K-Best (CK-Best). The CK-Best algorithm determines the window of constellation points by looking for the nearest coordinate of the data.
2. Method and experiment procedure
This research began with algorithm exploration and design, continued by building a MATLAB model to verify the validity of the QK-Best algorithm, and finished by designing an architecture around the algorithm. An RTL design was written in the Verilog language to test the demapper design.

2.1 QK-Best Algorithm
QK-Best is a new algorithm to solve the demapper problem. For effectiveness, it uses the quadrant of the received data to select constellation points. The first step of the algorithm is to find the true constellation location of the received data by dividing the received symbol by the channel response. The result is a complex number carrying the data-location information:

… (2)

Equation (2) can be re-written as:

… (3)

The H divisor in Equation (3) is then changed to a constant, as follows:

… (4)

Equation (4) is multiplied by the quadratic constant H and written as:

… (5)
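The quadrant-selection step can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the Gray-coded 16QAM grid and the use of y·h* in place of y/h (motivated by the move from Equation (3) to Equation (4)) are assumptions.

```python
import numpy as np

# Hypothetical sketch of the quadrant-selection idea behind QK-Best.
# 16QAM constellation on the grid {-3,-1,1,3} x {-3,-1,1,3} (assumed mapping).
POINTS_16QAM = np.array([a + 1j * b for a in (-3, -1, 1, 3)
                                    for b in (-3, -1, 1, 3)])

def quadrant_candidates(y, h):
    """Keep only the constellation points in the received symbol's quadrant.

    Instead of dividing y by h, the sketch multiplies by the conjugate h*:
    y * h* lies in the same quadrant as y / h, because |h|^2 is a positive
    real scale factor, so no divider is needed.
    """
    z = y * np.conj(h)                      # quadrant-preserving, division-free
    keep = (np.sign(POINTS_16QAM.real) == np.sign(z.real)) & \
           (np.sign(POINTS_16QAM.imag) == np.sign(z.imag))
    return POINTS_16QAM[keep]

# A symbol received in Quadrant I keeps 4 of the 16 points (75% discarded)
cands = quadrant_candidates(y=2.1 + 0.9j, h=1.0 + 0.0j)
print(len(cands))   # -> 4
```

For 64QAM the same selection keeps 16 of 64 points, which is where the 75% computation reduction discussed later comes from.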
Figure 1. Quadrant location to determine the constellation point

Figure 1 shows the probability regions of Quadrant I, Quadrant II, Quadrant III and Quadrant IV. Only the constellation points in the quadrant where the data occurs are used; the constellation points of the other quadrants are discarded.

2.2 Equations
To reduce the hardware complexity, manipulation using mathematical equations is needed. The manipulation was divided into two parts: the modified Euclidean distance and the simplified Log Likelihood Ratio (LLR).

• Modified Euclidean Distance
The Euclidean equation is used in the Distance Calculation module of the RTL design. The basic equation of the demapper can be written as:

Y = HS + n … (1)

where Y is the received symbol, H is the channel fading, S is the ideal constellation point, and n is the AWGN.

• Simplified LLR
The LLR equation begins with a logarithmic comparison between the alternative and null models:

… (6)

Assuming that all symbols are equally likely and using Bayes' law, Equation (6) becomes:

… (7)

The next equation is formed from the probability density function of the AWGN noise:

… (8)

The noise components are the in-phase noise and the quadrature noise, which are assumed to be independent of each other:

… (9)

Equations (8) and (9) are then combined:

… (10)

The combination of Equations (7) and (10) results in:

… (11)

Because of the logarithm and the summation, the RTL design is difficult to implement, so the approximate equation is used:

… (12)

Equations (11) and (12) can be combined into a simplified LLR equation:

… (13)

This equation is ready to be used to make the system output a soft type.
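The chain from Equation (6) to Equation (13) ends in the standard max-log simplification of the LLR, which can be sketched as follows. The QPSK constellation, the Gray bit mapping and the sign convention here are illustrative assumptions, not taken from the paper's RTL.

```python
import numpy as np

# Illustrative max-log LLR for one bit: the log-sum of Equation (11) is
# replaced by the minimum-distance approximation of Equation (12).

# Assumed Gray-mapped QPSK: bit b0 follows the sign of the real part
CONST = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
BIT0  = np.array([0, 1, 1, 0])          # b0 = 0 when Re > 0 (assumption)

def max_log_llr(z, sigma2):
    """LLR(b0) ~ (min distance over b0=1 points - min over b0=0) / sigma^2."""
    d2 = np.abs(z - CONST) ** 2          # squared Euclidean distances
    return (d2[BIT0 == 1].min() - d2[BIT0 == 0].min()) / sigma2

llr = max_log_llr(z=0.9 + 1.1j, sigma2=0.5)
print(llr > 0)   # -> True: under this convention, positive favours bit 0
```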
2.3 Speaker Architecture
The next step was designing an architecture that combines the QK-Best and LLR algorithms. The architecture must be pipelined to reach a higher working frequency. The proposed architecture is shown in Figure 2 below.
Figure 2 shows the Speaker Architecture, which has the shape of a speaker. Processing starts at the YH* Multiplier, which implements Equation (4) to prepare the input data for the quadrant detector. The quadrant detector module determines the quadrant and gives the next module a quadrant signal for index generation. The next step is the distance calculation module. After that, every complex error is converted to polar form, which has an angle and a magnitude. The error magnitudes are distributed according to each bit and its probability, and the minimum value for each is selected. The minimum value for probability zero is then subtracted by the minimum value for probability one. The result is a soft-decision output, which can be delivered as either a hard or a soft output.
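The final subtraction and output-selection stage described above can be sketched as follows; the function and variable names are assumptions, not the RTL signal names.

```python
# Illustrative final stage of the Speaker Architecture's output path:
# the per-bit soft value is the difference of the two minimum distances,
# and the hard output is simply its sign.

def soft_to_hard(min_dist_bit0, min_dist_bit1):
    """Soft output = metric(bit=0) - metric(bit=1); hard output = its sign."""
    soft = min_dist_bit0 - min_dist_bit1
    hard = 1 if soft > 0 else 0   # larger bit-0 distance -> bit 1 more likely
    return soft, hard

print(soft_to_hard(4.0, 1.0))   # -> (3.0, 1)
```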
Figure 2. Speaker Architecture (pipeline of YH* Multiplier, Quad Detector, Distance Calculation, Root Square, Min Search and bit LLR modules, with soft/hard output selection)
3. Results and discussion
To evaluate the performance of the algorithm and the RTL design, testing was divided into two simulation tests. The first test used MATLAB to find the performance of the QK-Best algorithm, indicated by comparing the Bit Error Rate (BER) of QK-Best with that of MLD.
The results are shown in Table 1.
Figure 4 shows that the Speaker Architecture has a working frequency of about 39.19 MHz and a critical path of 25.518 ns. The architecture can therefore be clocked at any frequency up to 39.19 MHz.
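The quoted working frequency follows directly from the critical path and can be checked in one line:

```python
# Maximum clock frequency is the reciprocal of the critical path delay.
critical_path_ns = 25.518
f_max_mhz = 1e3 / critical_path_ns   # 1 / (25.518 ns), expressed in MHz
print(round(f_max_mhz, 2))   # -> 39.19
```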
Table 1. Performance of QK-Best in Three Modulations

Modulation   Total Data   Total Error
64QAM        514,080      2,735
16QAM        616,896      1,752
QPSK         822,528      444

Table 2. Computation of Each Algorithm
Referring to Table 1, the performance of QK-Best can be calculated by dividing the total error by the total data multiplied by the number of bits per symbol. The BER of each modulation as a soft output was about 2.7 × 10⁻⁴ (QPSK), 7.1 × 10⁻⁴ (16QAM) and 8.9 × 10⁻⁴ (64QAM), while the hard output did not have any bit error.
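The quoted BER values can be reproduced from Table 1, assuming "Total Data" counts symbols and the error count is divided by the total number of transmitted bits (symbols × bits per symbol):

```python
# Reproducing the BER figures from the Table 1 counts.
table = {            # modulation: (total symbols, total bit errors, bits/symbol)
    "QPSK":  (822_528, 444,   2),
    "16QAM": (616_896, 1_752, 4),
    "64QAM": (514_080, 2_735, 6),
}
for mod, (symbols, errors, bits) in table.items():
    ber = errors / (symbols * bits)
    print(mod, f"{ber:.1e}")
# QPSK 2.7e-04, 16QAM 7.1e-04, 64QAM 8.9e-04
```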
Figure 4. Timing report of the Speaker Architecture
Modulation   MLD   CK-Best   CK-Best LLR   QK-Best   QK-Best LLR
QPSK         4     4         4             1         4
16QAM        16    16        16            4         9
64QAM        64    31        40            16        25
Table 2 compares the computation of each algorithm for each modulation. The QK-Best algorithm shows the lowest computation, a 75% reduction relative to MLD.
4. Conclusion
In this paper, it was shown that the QK-Best architecture theoretically reduces the computation by 75% relative to MLD in the three modulation modes 64QAM, 16QAM and QPSK. The MATLAB model of QK-Best exhibited good performance, as seen from its acceptable BER of 2.7 × 10⁻⁴ (QPSK), 7.1 × 10⁻⁴ (16QAM) and 8.9 × 10⁻⁴ (64QAM). The Speaker Architecture has a working frequency of about 39.19 MHz and a latency of 23 clock cycles. The demapper throughput of the system was 1 datum per clock cycle.
Figure 3. Comparison of MATLAB and ModelSim simulation data in
16QAM mode
Figure 3 shows the comparison between the MATLAB and ModelSim simulations for 16QAM modulation with hard output. It can be seen that there is no difference between them.
References
Altera Corp. Constellation Mapper and Demapper for WiMAX. Application Note 439, 2007.
Axler S. Algebra and Trigonometry. John Wiley & Sons, Inc. 2011.
Ciletti M. Advanced Digital Design with the Verilog HDL. Pearson Prentice Hall. 2003.
Haifang J. VLSI architecture of a K-Best detector for MIMO-OFDM wireless communication systems. Journal of Semiconductors. 2009. Vol. 30(7).
Haykin S. Communication Systems, 4th Ed. John Wiley & Sons, Inc. 2004.
Murthy JVK, Chockalingam A. Log-Likelihood Ratio Based Optimum Mappings Selection of Symbol Mapping Diversity with M-QAM. 2005.
Su K. Efficient Maximum Likelihood Detection for Communication Over Multiple Input Multiple Output Channels. Trinity Hall. 2005.
Walpole R. Probability & Statistics for Engineers & Scientists, 8th Ed. Pearson Prentice Hall. 2006.
Versatile Inc. CK-Best untuk Sistem OFDM SISO dan MIMO. Bandung: 2010.
Artikel Riset
Nominator TICA Cluster II 2011
Application of Adaptive Neuro Fuzzy
Inference System (ANFIS) for Lung Cancer
Detection Software
Sungging Haryo W.1, Sylvia Ayu P.2, M. Yusuf
Santoso3, Syamsul Arifin4
Department of Engineering Physics,
Sepuluh Nopember Institute of Technology,
e-mail: [email protected], [email protected],
[email protected], [email protected]
Abstract
Lung cancer is one of the leading causes of death. Symptoms of the disease usually do not appear until the disease has progressed, so early detection is not simple. A chest X-ray is usually the first test for any masses or spots on the lungs. This paper presents medical prognosis using an Adaptive Neuro Fuzzy Inference System (ANFIS) to predict lung cancer. Medical history, in the form of characteristic data, and pulmonary x-ray images of suspected patients are used as the input of the prediction software, with two and three membership functions. Afterwards, the software is validated by comparing training and testing results with the doctor's analysis. The results indicate that the software has a detection performance of 96% accuracy for the medical-history prediction and 90% for the imaging test.
Keywords: Lung Cancer, ANFIS, Diagnosis, X-Ray
1. Introduction
Lung cancer is a disease characterised by uncontrolled
cell growth in tissues of the lung.1 It is also the most
preventable cancer.2 Cure rate and prognosis depend on
the early detection and diagnosis of the disease.3 Lung
cancer symptoms usually do not appear until the disease
has progressed, so early detection is not easy. Many early lung cancers are diagnosed incidentally, after a doctor finds symptoms as a result of tests performed for an unrelated medical condition. Lung cancer can be detected from an X-ray (Rontgen) scan. Scan results are converted into digital data using image processing. This digital data is then supported by the doctor's analysis for the diagnosis. The doctor holds a key position as the human expert who determines the basic rules and diagnoses lung cancer.
During the diagnosis process, the subjectivity of the doctor is one of the important obstacles, and it is noteworthy that the doctor's decision is related to previous diagnoses. To obtain a precise diagnosis and interpret the x-ray scan accurately, previous diagnostic input and output data should be automated and used effectively. This research aims to design an artificial intelligence system based on an Adaptive Neuro Fuzzy Inference System for lung cancer diagnosis.
1.1 Lung Cancer Classification
Lung cancer is classified into two groups: Primary Lung Cancer and Secondary Lung Cancer. Primary lung cancer is further classified into two main types: (1) Non-Small Cell Lung Cancer (NSCLC) and (2) Small Cell Lung Cancer (SCLC). SCLC consists of small cells in large quantity, with rapid cell growth; typical medical therapies for SCLC are chemotherapy and radiation therapy. NSCLC, on the other hand, is a singular cell growth, but this type of cell often attacks more than one lung area. Secondary lung cancer emerges from the effect of cancer in other organs, usually starting with breast and/or intestinal cancer, which then spreads through the blood circulation, the lymphatic system, or direct effect on the closest organs.4-6
1.2 Lung Cancer Diagnosis Process
The process of lung cancer diagnosis depends on several factors: the medical history (personal smoking and secondhand exposure, past lung problems, current symptoms, activity background, and family history)7 and the physical examination (fever, strange breath sounds, swollen lymph nodes, liver enlargement, swollen hands/feet/face/ankles, changes in skin pigment, muscle weakness). The results of each step influence the next step in the process. Based on these factors (medical history, presenting symptoms, and physical examination), laboratory and imaging tests are included in the further diagnosis of a patient suspected of having lung cancer.
a. Laboratory Testing
The accuracy of cancer diagnosis based on sputum cytology is determined by the specimen collection method and the type and size of the tumor. Overall, cytological examination can establish the diagnosis in up to 90% of cases.
b. Imaging Test
An imaging test is performed to determine whether cancer cells are present. Although progress has been made in radiology with scans such as CT and MRI, a chest x-ray is often the first imaging study performed when primary or metastatic lung cancer is suspected.

The objective of lung cancer staging is to help doctors determine the treatments that are most likely to be effective, as well as the likely course of the prognosis. Lung cancer stages range from I to IV. The TNM classification system is used to determine the lung cancer stage. TNM expresses the factors as follows: T (tumor characteristics including size, location, and local invasion), N (regional lymph node involvement), M (metastasis status).8
1.3 ANFIS
ANFIS is the abbreviation for adaptive neuro-fuzzy inference system. This method is essentially a fuzzy inference system with back propagation that tries to minimize the error, and its behaviour combines an Artificial Neural Network and Fuzzy Logic.10 As in both ANN and FL, the input passes through the input layer (via input membership functions) and the output appears in the output layer (via output membership functions). In this advanced type of fuzzy logic, a neural network is used: a learning algorithm changes the parameters until the optimal solution is reached. In effect, the FL uses the advantages of the neural network to adjust its parameters.11
Figure 1. ANFIS structure

As shown in Figure 1, neuro-fuzzy systems consist of five layers, each with a different function. Each layer is constructed from several nodes, represented by squares or circles. A square symbolizes an adaptive node, meaning that the value of its parameters can be changed by adaptation, while a circle is a non-adaptive node with a constant value.12 The equations for each layer are described below:
a. First Layer
All nodes in the first layer are adaptive nodes (changeable parameters). The node functions for the first layer are:

O1,i = μAi(x), for i = 1, 2 (1)
O1,i = μBi-2(y), for i = 3, 4 (2)

where x and y are the inputs of node i, and Ai and Bi-2 are the membership functions of each input with respect to the fuzzy sets A and B. The membership function used is the generalized bell type (gbell).
b. Layer 2
All nodes in this layer are non-adaptive (fixed parameters). The node function of the second layer is:

O2,i = wi = μAi(x) · μBi(y), for i = 1, 2 (3)

Each output states the firing strength of a fuzzy rule. This function can be expanded when the premise consists of more than two fuzzy sets.
c. Layer 3
All nodes in layer 3 are non-adaptive and compute the normalized firing strength: the ratio of the firing strength of node i to the sum of all firing strengths from the previous layer. The node function of layer 3 is:

O3,i = w̄i = wi / (w1 + w2), for i = 1, 2 (4)

If more than two membership functions are constructed, the function can be expanded by dividing by the total of w over all rules.

d. Layer 4
Each node in layer 4 is an adaptive node with the node function:

O4,i = w̄i fi = w̄i (pi x + qi y + ri) (5)

where w̄ is the normalized firing strength from layer 3 and the parameters p, q, and r are the adaptive consequent parameters.

e. Layer 5
In this layer there is only one fixed node, which sums all inputs. The function of layer 5 is:

O5,i = Σi w̄i fi = (Σi wi fi) / (Σi wi) (6)

An adaptive network with these five layers is equivalent to a Takagi-Sugeno-Kang fuzzy inference system.13,14
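The five layers can be sketched as a minimal forward pass for two inputs with two generalized-bell membership functions each; the premise and consequent parameter values below are illustrative, not the trained values from this research.

```python
import numpy as np

# Minimal sketch of the five-layer ANFIS forward pass (toy parameters).

def gbell(x, a, b, c):
    """Generalized bell membership function: 1 / (1 + |(x - c) / a|^(2b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y):
    # Layer 1: membership degrees (adaptive premise parameters a, b, c)
    muA = [gbell(x, 1.0, 2.0, 0.0), gbell(x, 1.0, 2.0, 1.0)]
    muB = [gbell(y, 1.0, 2.0, 0.0), gbell(y, 1.0, 2.0, 1.0)]
    # Layer 2: firing strengths w_i = muA_i(x) * muB_i(y)
    w = np.array([muA[0] * muB[0], muA[1] * muB[1]])
    # Layer 3: normalized firing strengths
    w_bar = w / w.sum()
    # Layer 4: rule outputs w_bar_i * (p_i x + q_i y + r_i), assumed consequents
    p, q, r = np.array([1.0, 2.0]), np.array([1.0, 0.5]), np.array([0.0, 1.0])
    f = w_bar * (p * x + q * y + r)
    # Layer 5: overall output is the sum of the layer-4 outputs
    return f.sum()

out = anfis_forward(0.3, 0.7)
print(out > 0)   # -> True: a single scalar prediction
```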
2. Method and Experiment Procedure
2.1 Data Collection
The data in this research are divided into two types: (1) characteristic data and (2) x-ray data. The characteristic data are used to identify a patient's lung cancer risk level, while the x-ray data are used in the further diagnosis of a suspected patient. The characteristic data consist of information about normal and infected patients, with four variables used for identification: the number of cigarettes consumed per day, the duration of smoking, occupation, and cough. The x-ray data are classified into two groups: (1) normal x-rays and (2) positive x-rays. Before the x-ray data are used as software input, pre-processing is needed.
2.2 Pre-Processing
Figure 2. X-Ray image of (a) normal lungs and (b) lung cancer
Figure 2 shows that the x-ray image of positive lung cancer has a wider white area than that of a normal lung. Based on this phenomenon, the mean colour of an x-ray image can be determined with the following steps:
a. Scanning
The purpose of scanning is to convert the original data into digital data. During scanning, the lung x-ray image is separated into the left and right lung, so that the average detail of each side of the lung can be seen. Once the data are in digital form, the image pixels are arranged so that they can be used as software input. The size permitted by the software is 160 × 80 pixels.
b. Grey scaling
The scanner output can be loaded into the software, where it is detected as a matrix; the software shows the colour-scale matrix of the x-ray. At this stage, grey scaling is needed to simplify the computation, by dividing the RGB value by 3.
c. Normalization
Normalization is the process of dividing the whole grey-scale matrix by the largest value in the matrix. It makes all input images comparable even though the brightness levels of different inputs vary, so that the mean can be applied to all images. After normalization, all numbers in the matrix range from 0 to 1.
d. Calculation of the average colour
After the normalization matrix is obtained, the next step is to take a parameter that represents the image matrix. The normalization matrix contains a series of normalized colours. Based on the fact that a lung cancer image has more white pixels than a normal one, cancer can be indicated by averaging the normalization matrix. The average value is used as the ANFIS input at the diagnosis stage to predict whether a patient has cancer. White has the highest value, 255, meaning that the more white colour exists in the x-ray image, the larger the average of the normalization matrix and the more positive the detection result.
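The pre-processing chain (grey scaling, normalization, average colour) can be sketched end to end; the random array here merely stands in for a scanned 160 × 80 x-ray, and all names are illustrative.

```python
import numpy as np

# Sketch of the pre-processing steps a-d above on a stand-in image:
# 160 x 80 pixels, three RGB channels, values 0-255.
rng = np.random.default_rng(0)
image_rgb = rng.integers(0, 256, size=(160, 80, 3)).astype(float)

# b. Grey scaling: divide the summed RGB value by 3
grey = image_rgb.sum(axis=2) / 3.0

# c. Normalization: divide by the largest matrix value -> range [0, 1]
normalized = grey / grey.max()

# d. Average colour: the scalar fed to ANFIS; values closer to 1 mean
#    more white area, i.e. a more positive detection
mean_value = normalized.mean()
print(0.0 <= mean_value <= 1.0)   # -> True
```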
2.3 ANFIS Model Design
The design model is obtained using MATLAB to determine the most suitable premise and consequent parameters for the software. Each input is divided into approaches with 2 and 3 membership functions, and for each membership-function setting a data-training process is run. Data training aims to find the smallest error; once the smallest error is reached, the premise and consequent parameters can be determined for each membership function in the FIS Editor.
2.4 Software Design
The software is designed to help patients and doctors identify lung cancer. Following the order of the examination, the software is divided into three phases: screening, diagnosis, and staging.

a) Filtering Phase Software
The filtering software is used to observe the influence of the patient's habits on lung cancer risk. This phase uses the characteristic data as the examination parameter; ANFIS software is utilized in this process.

b) Diagnosis Phase Software
The diagnostic software consists of three main phases: (1) image loading; (2) initial diagnosis; and (3) advanced diagnosis. The negative or positive value of cancer in the lung is determined using ANFIS.

c) Staging Phase Software
Staging is the final step in the cancer examination. From the staging phase, the doctor's recommendations can be drawn.
2.5 Software Validation
The aim of the software validation is to measure the accuracy of the software in predicting lung cancer. The software is validated by comparing the doctor's diagnosis with the results of the prediction software.
3. Result And Discussion
In constructing the filtering software for medical-history data, two and three membership functions are applied. Software training is needed to determine the smallest error for each membership-function setting. For two membership functions, 16 basic rules are constructed from four inputs and two clusters; the error is steady at 2.9061 × 10⁻⁵ after 500 training epochs. Three membership functions give 81 rules, for which 100 training epochs were selected; at the 100th epoch, the smallest error obtained is 3.9333 × 10⁻⁷.
In the diagnosis software, two inputs are clustered into 2 and 3 membership functions. The blue nodes in Figure 5 below represent the number of rules. For two membership functions, the smallest error, 0.14755, was reached after 1000 epochs, while with three membership functions the smallest error obtained was 0.12734. Both the filtering and diagnosis software are trained to obtain the premise and consequent parameters. In the filtering software, 50 data are used for the two- and three-membership-function settings. Based on the software validation results, the filtering training data with two membership functions have 100% accuracy compared with the doctor's diagnosis (expert data), while with three membership functions the training data are 98% accurate. Thirty-five samples were processed in the diagnosis data training. For two membership functions, three test samples showed results different from the doctor's diagnosis; three membership functions performed better, with only 1 error in 35 samples.
In the testing process, both pieces of software are validated by comparing the software's predictions with the expert data. In the filtering software, out of 25 samples there were two errors with 2 membership functions; better results were obtained with three membership functions, with only 1 error. Twenty samples were tested and compared in the diagnosis software. After validation, the software with two membership functions had the better performance, with two errors in 20 tests, while 3 errors occurred in the software with three membership functions. The results of the training- and testing-data validation are shown in Table 1 below.
Table 1. Result of training and testing validation of the software

DATA                    MF   RMSE      VAF (%)   Accuracy (%)
Medical History TRAIN   2    0.49921   52.3337   100
                        3    0.60747   57.3042   98
Medical History TEST    2    0.41548   66.6048   92
                        3    0.34804   54.1573   96
X-Ray Scan TRAIN        2    0.36419   48.8421   91.4
                        3    0.29113   56.5538   97.2
X-Ray Scan TEST         2    0.25199   65.4569   90
                        3    0.27523   62.4914   85
RMSE measures the differences between the values predicted by a model or estimator and the values actually observed from the system being modeled. All data with an RMSE of less than 0.5 are considered well predicted.
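The RMSE criterion can be stated in a few lines; the data below are toy values, not the study's measurements.

```python
import numpy as np

# RMSE as used in Table 1: the root of the mean squared difference
# between the software's predictions and the expert's diagnoses.

def rmse(predicted, observed):
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# One miss in four binary diagnoses sits exactly at the 0.5 threshold
print(rmse([1, 0, 1, 1], [1, 0, 0, 1]))   # -> 0.5
```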
4. Conclusion
This research developed a powerful grade-estimation tool for lung cancer diagnosis using an Adaptive Neuro Fuzzy Inference System. The software proved able to provide an actual model by using both neural systems and fuzzy logic. Based on the experiments, the filtering prediction performed best, with 100% accuracy in training and 96% in testing. For the diagnosis prediction, the best training accuracy was 97.2% and the testing accuracy was 90%.
References
American Cancer Society. Lung Cancer Non-Small Cell Overview. American Cancer Society. 2011: 1.
American Cancer Society. Lung Cancer. URL: http://cancer.org, accessed on July 30, 2011.
Anonymous. Kanker Paru: Pedoman Diagnosis dan Penatalaksanaan di Indonesia. Perhimpunan Dokter Paru Indonesia. 2003: 2.
Balachandran K, Anitha R. Supervised Learning Processing Techniques for Pre-Diagnosis of Lung Cancer Disease. International Journal of Computer Applications. 2010, 1(4): 17.
Roche. Background Information: Non-small Cell Lung Cancer. URL: http://www.roche.co.id/fmfiles/re7229001/Indonesian/media/background.library/oncology/lc/Lung.Cancer.Backgrounder.pdf, accessed on July 30, 2011.
American Society of Clinical Oncology. Guide to Lung Cancer. Alexandria. Conquer Cancer Foundation. 2011: 2.
Dhillon D Paul, Snead David RJ. Advanced Diagnosis of Early Lung Cancer. 2007: 57.
Reeve Dana. NCCN Guideline for Patients. National Comprehensive Cancer Network. Fort Washington. 2010: 9-11.
Mountain Clinton F. Staging Classification of Lung Cancer: A Critical Evaluation. Clinics in Chest Medicine. 2002. 23(1): 104-107.
Kadir Abdul. Identifikasi Tiga Jenis Bunga Iris menggunakan ANFIS. Forum Teknik. 2010, 3(1): 10.
Kakar M, et al. Respiratory Motion Prediction by Using Adaptive Neuro Fuzzy Inference Systems (ANFIS). Institute of Physics Publishing. 2005, 50: 4722.
Cruz Adriano. ANFIS: Adaptive Neuro Fuzzy Inference Systems. Mestrado NCE. 2006: 6.
Chandana Sandep, Mayorga V Rene, Chan Christine W. Automated Knowledge Engineering. International Journal of Computer and Information Engineering. 2008, 2(6): 373.
Diah Iradiatu. Perbandingan antara ANFIS dan Observer Neural Network untuk Estimasi Kecepatan Motor Induksi Tiga Fasa. Jurnal Sains dan Teknologi. 2008, 6(2).
Diterbitkan oleh Persatuan Pelajar Indonesia Jepang
Website: http://io.ppijepang.org