Commit

Add pages for volume v245

lawrennd committed Jul 29, 2024
0 parents commit a4182e1

Showing 52 changed files with 2,963 additions and 0 deletions.
15 changes: 15 additions & 0 deletions Gemfile
@@ -0,0 +1,15 @@
source "https://rubygems.org"

git_source(:github) {|repo_name| "https://github.com/#{repo_name}" }

gem 'jekyll'

group :jekyll_plugins do
  gem 'github-pages'
  gem 'jekyll-remote-theme'
  gem 'jekyll-include-cache'
  gem 'webrick'
end

# gem "rails"

22 changes: 22 additions & 0 deletions README.md
@@ -0,0 +1,22 @@
# PMLR 245

To suggest fixes to this volume, please make a pull request containing the requested changes and a justification for them.

To edit the details of this conference, edit the [_config.yml](./_config.yml) file and submit a pull request.

To make changes to the individual paper details, edit the associated paper file in the [./_posts](./_posts) subdirectory.

For details of how to publish in PMLR please check https://proceedings.mlr.press/faq.html

For details of what is required to submit a proceedings please check https://proceedings.mlr.press/spec.html



Published as Volume 245 by the Proceedings of Machine Learning Research on 29 July 2024.

Volume Edited by:
* Zeng Nianyin
* Ram Bilas Pachori

Series Editors:
* Neil D. Lawrence
80 changes: 80 additions & 0 deletions _config.yml
@@ -0,0 +1,80 @@
---
booktitle: Proceedings of 2024 International Conference on Machine Learning and Intelligent
Computing
shortname: MLIC
volume: '245'
year: '2024'
start: &1 2024-04-26
end: 2024-04-28
published: 2024-07-29
layout: proceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: MLIC-2024
month: 0
cycles: false
bibtex_editor: Nianyin, Zeng and Pachori, Ram Bilas
editor:
- given: Zeng
  family: Nianyin
- given: Ram Bilas
  family: Pachori
title: Proceedings of Machine Learning Research
description: |
  Proceedings of 2024 International Conference on Machine Learning and Intelligent Computing
  Held in Wuhan, China on 26-28 April 2024
  Published as Volume 245 by the Proceedings of Machine Learning Research on 29 July 2024.
  Volume Edited by:
    Zeng Nianyin
    Ram Bilas Pachori
  Series Editors:
    Neil D. Lawrence
date_str: 26--28 Apr
url: https://proceedings.mlr.press
author:
  name: PMLR
baseurl: "/v245"
twitter_username: MLResearchPress
github_username: mlresearch
markdown: kramdown
exclude:
- README.md
- Gemfile
- ".gitignore"
plugins:
- jekyll-feed
- jekyll-seo-tag
- jekyll-remote-theme
remote_theme: mlresearch/jekyll-theme
style: pmlr
permalink: "/:title.html"
ghub:
  edit: true
  repository: v245
display:
  copy_button:
    bibtex: true
    endnote: true
    apa: true
  comments: false
volume_type: Volume
volume_dir: v245
email: ''
conference:
  name: Machine Learning and Intelligent Computing
  url: https://www.icmlic.org
  location: Wuhan, China
  dates:
  - *1
  - 2024-04-27
  - 2024-04-28
analytics:
  google:
    tracking_id: UA-92432422-1
orig_bibfile: "/Users/neil/mlresearch/v245/mlic24.bib"
# Site settings
# Original source: /Users/neil/mlresearch/v245/mlic24.bib
52 changes: 52 additions & 0 deletions _posts/2024-07-29-chao24a.md
@@ -0,0 +1,52 @@
---
title: Research on Features Extraction and Classification for Images based on Transformer
Learning
abstract: "Image processing and analysis have become an essential method in many areas
including medical impact, facial recognition, and social media analysis. With the
rapid development of big data and artificial intelligence technology, especially
the emergence of Transformer learning models, new methods have been brought to image
feature extraction and classification. However, existing transformer models are limited
in their ability to handle variable-length sequences and understand complex sequence
relationships. In this work, we propose a novel transformer-based framework that
combines a self-attention mechanism and a multi-head attention technique to efficiently
extract features from complex image data. In addition, we introduce an improved
classifier that enables efficient image classification using extracted features.
Our method takes into account not only the local features of the image but also
the global relationships between different regions to achieve a more accurate representation
of the features. We compare our model with existing convolutional neural networks
and other traditional machine learning methods on public datasets including
CIFAR-10 and MNIST. From our experimental results, we can observe that our transformer-learning-based
framework shows significant performance improvement in image feature extraction
and classification tasks."
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: chao24a
month: 0
tex_title: Research on Features Extraction and Classification for Images based on
Transformer Learning
firstpage: 67
lastpage: 75
page: 67-75
order: 67
cycles: false
bibtex_author: Chao, Wang
author:
- given: Wang
  family: Chao
date: 2024-07-29
address:
container-title: Proceedings of 2024 International Conference on Machine Learning
  and Intelligent Computing
volume: '245'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 7
  - 29
pdf: https://raw.githubusercontent.com/mlresearch/v245/main/assets/chao24a/chao24a.pdf
extras: []
# Format based on Martin Fenner's citeproc: https://blog.front-matter.io/posts/citeproc-yaml-for-bibliographies/
---
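
The abstract above centres on multi-head self-attention for image feature extraction. As a hedged illustration only (not the authors' code, which is not part of this commit), the following NumPy sketch shows scaled dot-product multi-head self-attention over toy patch embeddings; the sequence length, embedding dimension, and random projection weights are assumptions for the demo.

```python
# Hedged sketch (not the paper's code): multi-head self-attention over a toy set
# of image-patch embeddings, the core mechanism the abstract refers to.
import numpy as np


def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def multi_head_self_attention(patches, num_heads, rng):
    """patches: (seq_len, d_model) patch embeddings; weights are random stand-ins."""
    seq_len, d_model = patches.shape
    d_head = d_model // num_heads
    # Random projections stand in for learned Q/K/V/output weights.
    w_q, w_k, w_v, w_o = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                          for _ in range(4))

    def split(x):  # (seq, d_model) -> (heads, seq, d_head)
        return x.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(patches @ w_q), split(patches @ w_k), split(patches @ w_v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    attn = softmax(scores, axis=-1)                       # attention over all patches
    heads = attn @ v                                      # (heads, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ w_o                                   # fused per-patch features


rng = np.random.default_rng(0)
patches = rng.standard_normal((16, 64))   # toy stand-in for 16 patch embeddings of one image
features = multi_head_self_attention(patches, num_heads=4, rng=rng)
print(features.shape)                     # (16, 64)
```
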
49 changes: 49 additions & 0 deletions _posts/2024-07-29-chunning24a.md
@@ -0,0 +1,49 @@
---
title: A Sound Source Location Method Based on Time Difference of Arrival with Improved
Dung Beetle Optimizer
abstract: In microphone array sound source localization based on Time Difference of
Arrival (TDOA), traditional methods for solving the nonlinear equations of TDOA
lead to significant deviations and lower accuracy. To address this issue, this paper
proposes a TDOA-based sound source localization method using an Improved Dung Beetle
Optimizer (IDBO) algorithm. This method enhances the performance of the Dung Beetle
Optimizer (DBO) by employing strategies such as chaotic mapping, golden sine, and
adaptive t-distribution, and applies it to sound source localization. To evaluate
the performance of the IDBO, it is compared with DBO, Harris Hawk Optimizer (HHO),
Gray Wolf Optimizer (GWO), Bald Eagle Search (BES) algorithm, and Whale Optimization
Algorithm (WOA). The results show that, on benchmark functions and in localization
models, the IDBO achieves faster convergence, higher localization accuracy,
and better stability.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: chunning24a
month: 0
tex_title: A Sound Source Location Method Based on Time Difference of Arrival with
Improved Dung Beetle Optimizer
firstpage: 165
lastpage: 176
page: 165-176
order: 165
cycles: false
bibtex_author: Chunning, Song and Jindong, Zhang
author:
- given: Song
  family: Chunning
- given: Zhang
  family: Jindong
date: 2024-07-29
address:
container-title: Proceedings of 2024 International Conference on Machine Learning
  and Intelligent Computing
volume: '245'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 7
  - 29
pdf: https://raw.githubusercontent.com/mlresearch/v245/main/assets/chunning24a/chunning24a.pdf
extras: []
# Format based on Martin Fenner's citeproc: https://blog.front-matter.io/posts/citeproc-yaml-for-bibliographies/
---
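
The abstract above describes minimising the nonlinear TDOA equations with a metaheuristic (the IDBO). As a hedged sketch only, the following Python example sets up the standard TDOA range-difference objective and drives it with a naive random search standing in for the optimiser; the microphone layout, speed of sound, and search strategy are assumptions, not details from the paper.

```python
# Hedged sketch (not the paper's code): the nonlinear TDOA objective that a
# metaheuristic such as the IDBO would minimise; random search stands in for it here.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s (assumed)


def tdoa_residuals(source, mics, tdoas):
    """Range-difference residuals with respect to the reference microphone mics[0]."""
    dists = np.linalg.norm(mics - source, axis=1)
    return (dists[1:] - dists[0]) - SPEED_OF_SOUND * tdoas


def fitness(source, mics, tdoas):
    """Sum of squared residuals: the quantity the optimiser drives toward zero."""
    return float(np.sum(tdoa_residuals(source, mics, tdoas) ** 2))


rng = np.random.default_rng(0)
mics = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])   # assumed 2-D array geometry
true_source = np.array([1.2, 2.1])
dists = np.linalg.norm(mics - true_source, axis=1)
tdoas = (dists[1:] - dists[0]) / SPEED_OF_SOUND            # synthetic noise-free TDOAs

# Stand-in for the IDBO: naive random search over the 3 m x 3 m region.
best, best_f = None, np.inf
for _ in range(20000):
    cand = rng.uniform(0.0, 3.0, size=2)
    f = fitness(cand, mics, tdoas)
    if f < best_f:
        best, best_f = cand, f

print(np.round(best, 2), best_f)   # the estimate converges toward [1.2, 2.1]
```
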
52 changes: 52 additions & 0 deletions _posts/2024-07-29-cong24a.md
@@ -0,0 +1,52 @@
---
title: 'Research on Green Design Optimization of Ethnic Minority Architecture in Guangxi
Based on Machine Learning '
abstract: Guangxi Zhuang Autonomous Region, as one of China’s ethnic minority areas,
possesses rich heritage resources in ethnic architecture, with dual objectives of
cultural preservation and ecological sustainability. This paper explores the design
principles of ethnic minority architecture in Guangxi, investigates the relationship
between architecture and climate adaptability, and integrates them with digital
fabrication technology. Utilizing parametric platforms and performance simulation
tools, the study examines the climate adaptability of ethnic minority architecture
in Guangxi. Through machine learning, models for lighting, thermal, and humidity
environments specific to Guangxi’s ethnic minority regions are developed. Optimization
parameters for architectural design are proposed, and the reliability and accuracy
of the models are demonstrated through training and testing, providing ecological
design optimization strategies and references for future research on green architecture
in ethnic minority areas.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: cong24a
month: 0
tex_title: 'Research on Green Design Optimization of Ethnic Minority Architecture
in Guangxi Based on Machine Learning '
firstpage: 366
lastpage: 372
page: 366-372
order: 366
cycles: false
bibtex_author: Cong, Lu and Nenglang, Huang and Yang, Wu
author:
- given: Lu
  family: Cong
- given: Huang
  family: Nenglang
- given: Wu
  family: Yang
date: 2024-07-29
address:
container-title: Proceedings of 2024 International Conference on Machine Learning
  and Intelligent Computing
volume: '245'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 7
  - 29
pdf: https://raw.githubusercontent.com/mlresearch/v245/main/assets/cong24a/cong24a.pdf
extras: []
# Format based on Martin Fenner's citeproc: https://blog.front-matter.io/posts/citeproc-yaml-for-bibliographies/
---
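
The abstract above outlines fitting machine-learning models of environmental performance and using them to propose design parameters. As a hedged sketch under assumed data only, the following NumPy example fits a simple least-squares surrogate to synthetic design/performance pairs and ranks candidate designs; the parameter names, synthetic data, and linear model are illustrative assumptions, not the paper's models.

```python
# Hedged sketch (not the paper's models): a linear surrogate fitted to synthetic
# design/performance data, then used to rank candidate designs.
import numpy as np

rng = np.random.default_rng(0)

# Assumed design parameters: [roof overhang depth (m), window-to-wall ratio, wall thickness (m)]
low, high = np.array([0.3, 0.1, 0.1]), np.array([1.5, 0.6, 0.5])
X = rng.uniform(low, high, size=(400, 3))
# Synthetic "simulated" discomfort score standing in for the performance simulations.
y = 2.0 * X[:, 1] - 1.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0.0, 0.05, size=400)

# Fit a least-squares surrogate on a training split and check it on held-out data.
A = np.hstack([X, np.ones((400, 1))])            # add intercept column
coef, *_ = np.linalg.lstsq(A[:300], y[:300], rcond=None)
pred = A[300:] @ coef
r2 = 1.0 - np.sum((y[300:] - pred) ** 2) / np.sum((y[300:] - y[300:].mean()) ** 2)
print("held-out R^2:", round(float(r2), 3))

# Use the surrogate to rank candidate designs (lower predicted score is better here).
cand = rng.uniform(low, high, size=(1000, 3))
scores = np.hstack([cand, np.ones((1000, 1))]) @ coef
print("suggested design parameters:", np.round(cand[np.argmin(scores)], 2))
```
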
49 changes: 49 additions & 0 deletions _posts/2024-07-29-fan24a.md
@@ -0,0 +1,49 @@
---
title: Federated Learning Algorithm based on Gaussian Local Differential Noise
abstract: In differential privacy-based federated learning, the data of different
clients are often not independently and identically distributed. During model training,
each client’s data will optimize and converge towards its own optimal direction,
causing a client drift phenomenon, resulting in a decrease in accuracy and making
it difficult to obtain the optimal global model. To address this issue, a federated
learning algorithm based on local differential privacy is proposed. Each client
is assigned its own control variable c_i to control the model update direction, and
a global control variable c is set on the server side. The SCAFFOLD algorithm is
used to aggregate all client model parameters and control variables. During model
training, a correction term c - c_i is added when updating parameters on the client
side, and the model training bias is adjusted according to the global control variable
obtained from the server side in the previous round, thereby steering the model’s
iterations towards the global optimum. Experimental results on the CIFAR-10
dataset demonstrate the effectiveness of the new algorithm.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: fan24a
month: 0
tex_title: Federated Learning Algorithm based on Gaussian Local Differential Noise
firstpage: 325
lastpage: 339
page: 325-339
order: 325
cycles: false
bibtex_author: Fan, Wu and Maoting, Gao
author:
- given: Wu
  family: Fan
- given: Gao
  family: Maoting
date: 2024-07-29
address:
container-title: Proceedings of 2024 International Conference on Machine Learning
  and Intelligent Computing
volume: '245'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 7
  - 29
pdf: https://raw.githubusercontent.com/mlresearch/v245/main/assets/fan24a/fan24a.pdf
extras: []
# Format based on Martin Fenner's citeproc: https://blog.front-matter.io/posts/citeproc-yaml-for-bibliographies/
---
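
The abstract above describes a SCAFFOLD-style client update in which the correction term c - c_i counteracts client drift. As a hedged sketch only, the following NumPy example shows that update rule on toy quadratic clients, with Gaussian noise standing in for local differential privacy; the loss, step size, noise scale, and control-variate refresh are assumptions for the demo, not the paper's implementation.

```python
# Hedged sketch (not the paper's implementation): a SCAFFOLD-style client update
# in which the correction term (c - c_i) steers local steps toward the global
# direction, with Gaussian noise standing in for local differential privacy.
import numpy as np

LR, LOCAL_STEPS = 0.1, 5


def client_update(x, grad_fn, c_global, c_local, noise_std, rng):
    x = x.copy()
    for _ in range(LOCAL_STEPS):
        g = grad_fn(x) + rng.normal(0.0, noise_std, size=x.shape)  # noisy local gradient
        x = x - LR * (g - c_local + c_global)   # correction term c - c_i from the abstract
    return x


rng = np.random.default_rng(0)
# Toy heterogeneous clients: each local loss pulls toward a different optimum,
# the "client drift" situation the abstract describes.
client_opts = [np.array([2.0, 0.0]), np.array([0.0, 2.0])]
grad_fns = [lambda x, o=o: x - o for o in client_opts]   # gradients of 0.5*||x - o||^2

x_global, c_global = np.zeros(2), np.zeros(2)
c_locals = [np.zeros(2) for _ in client_opts]

for _ in range(30):
    updates, new_cs = [], []
    for grad_fn, c_i in zip(grad_fns, c_locals):
        x_i = client_update(x_global, grad_fn, c_global, c_i, noise_std=0.01, rng=rng)
        # Control-variate refresh (SCAFFOLD "option II"): c_i+ = c_i - c + (x - y_i)/(K*lr).
        new_cs.append(c_i - c_global + (x_global - x_i) / (LOCAL_STEPS * LR))
        updates.append(x_i)
    x_global = np.mean(updates, axis=0)
    c_global = c_global + np.mean([nc - c for nc, c in zip(new_cs, c_locals)], axis=0)
    c_locals = new_cs

print(np.round(x_global, 2))   # approaches [1, 1], the optimum of the averaged loss
```
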
57 changes: 57 additions & 0 deletions _posts/2024-07-29-fan24b.md
@@ -0,0 +1,57 @@
---
title: Decentralized Federated Learning Algorithm Based on Federated Groups and Secure
Multiparty Computation
abstract: To address the problems that privacy-preserving centralized federated learning
relies on trusted central servers, has low resistance to malicious attacks, and is
prone to privacy leakage, this paper proposes a decentralized federated learning
algorithm based on federated groups and secure multiparty computation. A federated
group mechanism based on model relevance is established so that each client has its
own federated group; model parameters are transmitted only among federated group
members, and members outside the group cannot access the parameter information. Secret
owners use secret sharing algorithms to split their model parameters into several
secret shares, which are then transmitted to federated group members through secure
channels. Federated group members aggregate all transmitted secret shares by weighted
averaging; the secret owner receives the aggregated secret shares passed back from
all federated group members, uses the secret recovery algorithm to recover the secret,
and obtains the updated model parameters. When a member of a federated group becomes
a Byzantine node, it is removed from the group and another client is selected to join
based on model relevance. In this way, each client participating in federated learning
serves as both a data node and a computing node, so federated learning eliminates
reliance on servers and achieves decentralization. The privacy performance of the
proposed algorithm is analyzed theoretically, and experiments on the FedML platform
demonstrate that the algorithm has stronger resistance to attacks.
layout: inproceedings
series: Proceedings of Machine Learning Research
publisher: PMLR
issn: 2640-3498
id: fan24b
month: 0
tex_title: Decentralized Federated Learning Algorithm Based on Federated Groups and
Secure Multiparty Computation
firstpage: 340
lastpage: 348
page: 340-348
order: 340
cycles: false
bibtex_author: Fan, Wu and Maoting, Gao
author:
- given: Wu
  family: Fan
- given: Gao
  family: Maoting
date: 2024-07-29
address:
container-title: Proceedings of 2024 International Conference on Machine Learning
  and Intelligent Computing
volume: '245'
genre: inproceedings
issued:
  date-parts:
  - 2024
  - 7
  - 29
pdf: https://raw.githubusercontent.com/mlresearch/v245/main/assets/fan24b/fan24b.pdf
extras: []
# Format based on Martin Fenner's citeproc: https://blog.front-matter.io/posts/citeproc-yaml-for-bibliographies/
---
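
The abstract above relies on additive secret sharing: each owner splits its parameters into shares, group members aggregate the shares by weighted averaging, and the owner recovers only the aggregate. As a hedged sketch only, the following NumPy example reproduces that flow for toy parameter vectors; the group size, weights, and plain floating-point shares are assumptions, not the paper's protocol.

```python
# Hedged sketch (not the paper's protocol): additive secret sharing of model
# parameters among federated-group members, weighted aggregation of the shares,
# and recovery of the aggregate by the secret owner.
import numpy as np


def split_into_shares(params, n_shares, rng):
    """Split `params` into n additive shares that sum back to the original vector."""
    shares = [rng.standard_normal(params.shape) for _ in range(n_shares - 1)]
    shares.append(params - np.sum(shares, axis=0))
    return shares


rng = np.random.default_rng(0)
group_size = 3
weights = np.array([0.5, 0.3, 0.2])                                   # assumed aggregation weights
client_params = [rng.standard_normal(4) for _ in range(group_size)]   # toy 4-d "models"

# 1. Every owner splits its parameters into one share per federated-group member.
all_shares = [split_into_shares(p, group_size, rng) for p in client_params]

# 2. Member m aggregates the m-th share from every owner by weighted averaging;
#    no member ever sees raw parameters, only the shares sent to it.
aggregated = [np.sum([weights[i] * all_shares[i][m] for i in range(group_size)], axis=0)
              for m in range(group_size)]

# 3. Recombining the members' aggregated shares recovers the weighted-average model.
recovered = np.sum(aggregated, axis=0)
expected = np.sum([w * p for w, p in zip(weights, client_params)], axis=0)
print(np.allclose(recovered, expected))                               # True
```
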