
INT301 Bio Computation: Question Types Summary

Perceptron design and calculation

1.

XOR: the output is true when exactly one of the two inputs is true.

2. 

 

3. 

5.

6.

7.

2^3

2^n

9.

a) Test directly.

b) Change the threshold from v ≥ 2 to v ≥ 1.

10.

No, because it cannot be separated by a single decision boundary; XOR is not linearly separable.
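
A quick numpy sketch (my own illustration; the learning rate and epoch count are arbitrary) showing that the same perceptron rule learns AND but keeps making errors on XOR:

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Threshold perceptron; returns the number of errors in the final epoch."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, ti in zip(X, y):
            out = 1 if np.dot(w, xi) + b >= 0 else 0
            w += lr * (ti - out) * xi          # perceptron learning rule
            b += lr * (ti - out)
            errors += int(out != ti)
    return errors

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(train_perceptron(X, np.array([0, 0, 0, 1])))  # AND: 0 errors (linearly separable)
print(train_perceptron(X, np.array([0, 1, 1, 0])))  # XOR: errors remain (not separable)
```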

Backpropagation

1.

2. 

3.

4.

This one is from the tutorial; the exam probably won't be this complicated.

5.

6.

Hopfield Network

The goal of a Hopfield network is to store a set of patterns and, when given a partial or corrupted pattern, recover the original pattern through the network's dynamic evolution.

Steps:

1) calculate the weight matrix

In short: multiply each pattern by its own transpose, sum these outer products, and finally subtract the identity matrix times the number of patterns (see the sketch after these steps).

2) calculate the input

Multiply the weights of the particular node by the input. Note that a different update order may lead to a different result: whichever index of X is being updated, use the corresponding row of the weight matrix.

3) update the state

Compute the node's new state from the result obtained (take the sign of the weighted sum).
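
A minimal numpy sketch of these three steps, assuming bipolar (±1) patterns and asynchronous updates in index order (the patterns and probe here are just examples):

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1],
                     [1,  1, -1, -1]])          # two example bipolar patterns
n = patterns.shape[1]

# 1) weight matrix: sum of outer products minus (number of patterns) * identity
W = sum(np.outer(p, p) for p in patterns) - len(patterns) * np.eye(n)

# 2)+3) asynchronous update: recompute one node at a time using its weight row
x = np.array([1, 1, 1, -1])                     # first pattern with node 1 flipped
for i in range(n):                              # a different order may give a different result
    net = W[i] @ x                              # input to node i
    x[i] = 1 if net >= 0 else -1                # node i's new state
print(x)                                        # recovers the first stored pattern
```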

1.

 

Here we use the weights from the second row.

2.

3.

Just pick any two patterns and test them.

5.

I suspect there is something wrong with how the teacher wrote this question, so I'll set it aside for now.

 

As for why the worked example only picks node 2 and node 4, my guess is that nodes 1 and 3 behave the same as node 2, so only node 2 is discussed.

6.

 

7.

8.

9.

10.

The neural network model used is an auto-encoder, which is trained to compress and reconstruct data. It learns to identify the key features of the input data (e.g., vehicles) by encoding them into a lower-dimensional representation and then decoding them back.

Explanation:

  1. Model Description:

    • The encoder compresses the vehicle images into a smaller set of features.
    • The decoder reconstructs the images from these features.
    • The visualized weights show the important patterns (e.g., shapes of vehicles) learned by the encoder.
  2. Possible Uses:

    • Feature Extraction: These learned features can help identify vehicle types or shapes.
    • Data Compression: The model reduces the data size for storage or processing.
    • Anomaly Detection: By comparing reconstructed images to the original, unusual vehicles can be detected.
    • Surveillance: It helps in organizing and understanding vehicle data efficiently.
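
A minimal PyTorch sketch of such an auto-encoder; the layer sizes and data are stand-ins, not the actual vehicle dataset:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: flattened 28x28 images compressed to a 32-d code.
encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))
model = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)            # stand-in batch; real data would be vehicle images
for _ in range(5):
    recon = model(x)               # encode then decode
    loss = loss_fn(recon, x)       # reconstruction error drives learning
    opt.zero_grad()
    loss.backward()
    opt.step()

# Anomaly detection idea: a large per-image reconstruction error flags an unusual input.
errors = ((model(x) - x) ** 2).mean(dim=1)
```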

Application

1.

I'll use a self-organizing map for clustering since the number of topics is not known. The overall pipeline (a code sketch of part of it follows this list):

  • Preprocess the Text: Clean and tokenize the documents. You may also lemmatize words and remove stopwords.

  • Feature Representation: Convert the documents into numerical vectors. You can use:

    • TF-IDF for a sparse representation based on word frequency.
    • Embeddings (e.g., BERT, Word2Vec) for a dense, semantic representation.
  • Apply Clustering:

    • Use K-means if you have a rough idea of the number of topics (in fact, it seems you can just pick the number yourself).
    • SOM (not recommended by GPT).
  • Evaluate Clusters: Use metrics like Silhouette Score to assess how well documents are grouped.

  • Interpret the Results: Examine the top terms or document samples in each cluster to understand the topics.
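
A minimal scikit-learn sketch of the TF-IDF + K-means + silhouette part of this pipeline (the corpus and number of clusters are placeholders):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

docs = [                       # placeholder corpus; real input would be the document set
    "stock markets fell sharply today",
    "the central bank raised interest rates",
    "the team won the championship final",
    "injury forces star player to retire",
    "new vaccine shows promising trial results",
    "hospital reports rise in flu cases",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)   # sparse TF-IDF vectors
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(km.labels_)                              # cluster assignment per document
print(silhouette_score(X, km.labels_))         # rough quality check of the grouping
```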

2. Supervised/unsupervised: choosing a method and implementing it

 

CNN

  • Best for: Spatial feature extraction from EEG signals, useful when analyzing data from multiple channels.
  • Advantage: CNNs can automatically capture important spatial patterns in EEG signals, making them effective for tasks like epilepsy detection or sleep stage classification.
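
A minimal PyTorch sketch of a 1-D CNN over multi-channel EEG; the channel count, window length, and classes are hypothetical:

```python
import torch
import torch.nn as nn

x = torch.randn(8, 22, 256)     # hypothetical batch: 8 trials, 22 channels, 256 time samples

model = nn.Sequential(
    nn.Conv1d(22, 16, kernel_size=7, padding=3),  # filters combine channels over short time windows
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),                      # pool over time
    nn.Flatten(),
    nn.Linear(16, 2),                             # e.g. seizure vs. non-seizure
)
logits = model(x)                                 # shape: (8, 2)
```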

I think a clustering method such as k-means would also be helpful.

3. Supervised/unsupervised: choosing a method and implementing it

Since the task is non-linear, a Radial-Basis Function Network could be used.

Self-Organizing Map

4.

 

5. 

Self-Organizing Map Network

1.

 

2.

A Self-Organizing Map (SOM) is particularly well-suited for tasks where the goal is to perform unsupervised clustering or data visualization in a lower-dimensional space, while preserving the topological relationships of the input data.
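
A minimal numpy sketch of SOM training; the grid size, learning rate, and neighbourhood width are illustrative choices, and the usual decay schedules are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 3))                 # e.g. 200 samples with 3 features
grid = rng.random((5, 5, 3))                # 5x5 map, one weight vector per node

lr, sigma = 0.5, 1.5
for _ in range(1000):
    x = data[rng.integers(len(data))]
    # best matching unit: the node whose weights are closest to x
    d = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)
    # neighbourhood function: nodes near the BMU on the grid are pulled toward x too
    ii, jj = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    grid += lr * h[:, :, None] * (x - grid)
```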

Gaussian RBF network

1.

 

 

Matlab code

view(net) --  Visualize the architecture of the trained neural network

1. self-organizing map

2. 

 

3.

Concepts

1.fully-connected neural network vs convolutional neural network

  • Inefficient Parameter Use: FCNs have a large number of parameters, which can lead to overfitting, higher memory usage, and slower training.
  • Loss of Spatial Structure: FCNs do not preserve spatial hierarchies in data like images, making them less suited for image-related tasks.
  • Scaling Issues with Large Inputs: FCNs struggle with large input sizes (e.g., high-resolution images) because they require flattening, which destroys spatial information.
  • Computationally Expensive: FCNs become computationally expensive with large datasets and deep architectures.
  • No Translation Invariance: FCNs do not inherently recognize translation invariance in data, making them less effective for tasks like image recognition.
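
A quick parameter count illustrating the first point, assuming an illustrative 224x224 RGB input, a 1000-unit fully connected layer, and 64 3x3 filters:

```python
# Fully connected first layer: every flattened pixel connects to every hidden unit.
fc_params = (224 * 224 * 3) * 1000 + 1000    # about 150.5 million weights plus biases

# Convolutional first layer: 64 filters of size 3x3x3, shared across all positions.
conv_params = 64 * (3 * 3 * 3) + 64          # = 1,792 weights plus biases

print(fc_params, conv_params)
```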

2. rectified linear units vs sigmoid activation function

The benefits of faster convergence, avoiding the vanishing gradient problem, and computational efficiency make ReLU a more effective and scalable choice for modern deep learning models.
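
A small numpy illustration of the vanishing-gradient point: the sigmoid derivative is at most 0.25, so multiplying it across many layers shrinks the gradient, while the ReLU derivative is 1 for positive inputs:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.0                                    # the sigmoid derivative is largest at z = 0
d_sigmoid = sigmoid(z) * (1 - sigmoid(z))  # = 0.25
d_relu = 1.0                               # for any z > 0

layers = 20
print(d_sigmoid ** layers)  # ~9.1e-13: the gradient signal has effectively vanished
print(d_relu ** layers)     # 1.0: the gradient passes through unchanged
```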

3. transfer function in mlp?

Omitting activation functions reduces the MLP to a simple linear model, which lacks the ability to handle nonlinear problems. Although it simplifies computation, it’s rarely used because it greatly limits the network's performance.
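
A tiny numpy check of this point: with the activation removed, two stacked weight matrices are exactly one linear map:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
x = rng.standard_normal(3)

two_layers = W2 @ (W1 @ x)                  # "hidden layer" with identity activation
one_layer = (W2 @ W1) @ x                   # a single equivalent linear layer
print(np.allclose(two_layers, one_layer))   # True
```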

c. Explain overfitting

Overfitting happens when a model fits too closely to the training data, losing the ability to generalize. It is most likely to occur with small datasets, complex models, and noisy data. To mitigate overfitting, use regularization, increase data, or employ simpler models. Monitoring validation performance is key to identifying it.

d. Explain backpropagation (common reasons training may fail):

  • Poor weight initialization.
  • Vanishing or exploding gradients.
  • Improper learning rate.
  • Insufficient training time.

e. Oja's rule vs. the Hebb rule
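
Quick recap: Oja's rule adds a decay term to the plain Hebb rule, Δw = η·y·(x − y·w), which keeps the weight vector bounded and drives it toward the first principal component of the inputs. A minimal numpy sketch (the data and learning rate are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Data with most variance along the first axis; Oja's rule should align w with it.
X = rng.standard_normal((1000, 2)) @ np.array([[2.0, 0.0], [0.0, 0.5]])

w = rng.standard_normal(2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Hebbian term eta*y*x plus decay term -eta*y^2*w

print(w, np.linalg.norm(w))      # roughly a unit vector along the first principal axis
```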

c. Explain RBF units

RBF networks adjust centroids, widths, and output weights during learning. Their localized response, faster training, and better handling of non-linearities make them advantageous over sigmoidal units for tasks like function approximation and classification.
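
A minimal numpy sketch of a Gaussian RBF network solving XOR, with fixed centres and width and the output weights fitted by least squares (all values are illustrative):

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)            # XOR targets

centres = np.array([[0, 0], [1, 1]], dtype=float)  # one Gaussian unit per centre
sigma = 1.0

d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)  # squared distance to each centre
Phi = np.exp(-d2 / (2 * sigma ** 2))                           # localized Gaussian activations
H = np.hstack([Phi, np.ones((len(X), 1))])                     # add a bias column

w, *_ = np.linalg.lstsq(H, t, rcond=None)                      # linear output weights
print(np.round(H @ w, 3))                                      # ~ [0, 1, 1, 0]
```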

d. Explain the Elman Network

The Elman Network is a recurrent neural network with an additional context layer that stores the previous hidden state, enabling it to capture temporal dependencies. It processes sequential data through its input, hidden, and output layers, with training performed via Backpropagation Through Time (BPTT).

In time-series prediction, the network uses past observations to predict future values by learning patterns over time. Once trained, it predicts iteratively by feeding its output back as input, making it effective for tasks like forecasting or trend analysis.
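
A minimal numpy sketch of the Elman forward pass, where the context is simply the previous hidden state; the dimensions are illustrative and BPTT training is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 1, 8, 1
Wx = rng.standard_normal((n_hidden, n_in)) * 0.5       # input -> hidden
Wc = rng.standard_normal((n_hidden, n_hidden)) * 0.5   # context (previous hidden) -> hidden
Wo = rng.standard_normal((n_out, n_hidden)) * 0.5      # hidden -> output

series = np.sin(np.linspace(0, 6, 30))                 # toy time series
h = np.zeros(n_hidden)                                 # context starts empty
preds = []
for x_t in series:
    h = np.tanh(Wx @ np.array([x_t]) + Wc @ h)         # hidden state depends on input and context
    preds.append(float(Wo @ h))                        # one-step-ahead output (untrained here)
print(preds[-3:])
```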

a. SOM: reducing high-dimensional data to a low-dimensional map (?)

CNN

1. 

2.

 
